00:00:00.000 Started by upstream project "autotest-nightly" build number 4286
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3649
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.272 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.273 The recommended git tool is: git
00:00:00.273 using credential 00000000-0000-0000-0000-000000000002
00:00:00.274 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.313 Fetching changes from the remote Git repository
00:00:00.315 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.351 Using shallow fetch with depth 1
00:00:00.351 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.351 > git --version # timeout=10
00:00:00.379 > git --version # 'git version 2.39.2'
00:00:00.379 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.393 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.393 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.357 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.370 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.381 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.381 > git config core.sparsecheckout # timeout=10
00:00:06.391 > git read-tree -mu HEAD # timeout=10
00:00:06.407 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.427 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.427 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.517 [Pipeline] Start of Pipeline
00:00:06.528 [Pipeline] library
00:00:06.529 Loading library shm_lib@master
00:00:06.529 Library shm_lib@master is cached. Copying from home.
00:00:06.544 [Pipeline] node
00:00:06.559 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest
00:00:06.562 [Pipeline] {
00:00:06.570 [Pipeline] catchError
00:00:06.571 [Pipeline] {
00:00:06.581 [Pipeline] wrap
00:00:06.588 [Pipeline] {
00:00:06.596 [Pipeline] stage
00:00:06.598 [Pipeline] { (Prologue)
00:00:06.615 [Pipeline] echo
00:00:06.616 Node: VM-host-WFP1
00:00:06.622 [Pipeline] cleanWs
00:00:06.632 [WS-CLEANUP] Deleting project workspace...
00:00:06.632 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.637 [WS-CLEANUP] done
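The checkout at the top of this log is a shallow, pinned fetch: depth 1, an explicit ref, then a detached checkout of the resolved commit, so every build sees exactly one revision. A minimal sketch of the same sequence in plain git, assuming an empty working directory (the credential and proxy handling Jenkins does is omitted):

    # Sketch only: the shallow, pinned checkout pattern used above.
    git init jbp && cd jbp
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f "$(git rev-parse FETCH_HEAD^{commit})"   # detach at the fetched commit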
00:00:06.844 [Pipeline] setCustomBuildProperty
00:00:06.926 [Pipeline] httpRequest
00:00:07.567 [Pipeline] echo
00:00:07.569 Sorcerer 10.211.164.20 is alive
00:00:07.578 [Pipeline] retry
00:00:07.580 [Pipeline] {
00:00:07.594 [Pipeline] httpRequest
00:00:07.598 HttpMethod: GET
00:00:07.599 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.599 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.600 Response Code: HTTP/1.1 200 OK
00:00:07.601 Success: Status code 200 is in the accepted range: 200,404
00:00:07.601 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.784 [Pipeline] }
00:00:08.801 [Pipeline] // retry
00:00:08.810 [Pipeline] sh
00:00:09.092 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.106 [Pipeline] httpRequest
00:00:09.460 [Pipeline] echo
00:00:09.462 Sorcerer 10.211.164.20 is alive
00:00:09.473 [Pipeline] retry
00:00:09.476 [Pipeline] {
00:00:09.493 [Pipeline] httpRequest
00:00:09.498 HttpMethod: GET
00:00:09.499 URL: http://10.211.164.20/packages/spdk_a5dab6cf7998a288aafc8366202b334b4ac5d08c.tar.gz
00:00:09.500 Sending request to url: http://10.211.164.20/packages/spdk_a5dab6cf7998a288aafc8366202b334b4ac5d08c.tar.gz
00:00:09.517 Response Code: HTTP/1.1 200 OK
00:00:09.518 Success: Status code 200 is in the accepted range: 200,404
00:00:09.519 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_a5dab6cf7998a288aafc8366202b334b4ac5d08c.tar.gz
00:00:47.960 [Pipeline] }
00:00:47.978 [Pipeline] // retry
00:00:47.986 [Pipeline] sh
00:00:48.267 + tar --no-same-owner -xf spdk_a5dab6cf7998a288aafc8366202b334b4ac5d08c.tar.gz
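Both package fetches above are plain HTTP GETs against the Sorcerer cache, keyed by commit SHA, each followed by an ownership-stripping extract. A minimal equivalent of that fetch-and-unpack step, assuming the cache host from the log is reachable:

    # Sketch of the httpRequest + tar steps above.
    cache=http://10.211.164.20/packages
    for pkg in jbp_db4637e8b949f278f369ec13f70585206ccd9507 spdk_a5dab6cf7998a288aafc8366202b334b4ac5d08c; do
        curl -fsS -o "$pkg.tar.gz" "$cache/$pkg.tar.gz"
        tar --no-same-owner -xf "$pkg.tar.gz"   # don't preserve ownership recorded in the archive
    done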
00:00:50.821 [Pipeline] sh
00:00:51.106 + git -C spdk log --oneline -n5
00:00:51.106 a5dab6cf7 test/nvme/xnvme: Make sure nvme selected for tests is not used
00:00:51.106 876509865 test/nvme/xnvme: Test all conserve_cpu variants
00:00:51.107 a25b16198 test/nvme/xnvme: Enable polling in nvme driver
00:00:51.107 bb53e3ad9 test/nvme/xnvme: Drop null_blk
00:00:51.107 ace52fb4b test/nvme/xnvme: Tidy the test suite
00:00:51.128 [Pipeline] writeFile
00:00:51.144 [Pipeline] sh
00:00:51.429 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:51.441 [Pipeline] sh
00:00:51.723 + cat autorun-spdk.conf
00:00:51.723 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:51.723 SPDK_TEST_NVME=1
00:00:51.723 SPDK_TEST_FTL=1
00:00:51.723 SPDK_TEST_ISAL=1
00:00:51.723 SPDK_RUN_ASAN=1
00:00:51.723 SPDK_RUN_UBSAN=1
00:00:51.723 SPDK_TEST_XNVME=1
00:00:51.723 SPDK_TEST_NVME_FDP=1
00:00:51.723 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:51.730 RUN_NIGHTLY=1
00:00:51.732 [Pipeline] }
00:00:51.746 [Pipeline] // stage
00:00:51.763 [Pipeline] stage
00:00:51.765 [Pipeline] { (Run VM)
00:00:51.778 [Pipeline] sh
00:00:52.060 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:52.060 + echo 'Start stage prepare_nvme.sh'
00:00:52.060 Start stage prepare_nvme.sh
00:00:52.060 + [[ -n 7 ]]
00:00:52.060 + disk_prefix=ex7
00:00:52.060 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:00:52.060 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:00:52.060 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:00:52.060 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:52.060 ++ SPDK_TEST_NVME=1
00:00:52.060 ++ SPDK_TEST_FTL=1
00:00:52.060 ++ SPDK_TEST_ISAL=1
00:00:52.060 ++ SPDK_RUN_ASAN=1
00:00:52.060 ++ SPDK_RUN_UBSAN=1
00:00:52.060 ++ SPDK_TEST_XNVME=1
00:00:52.060 ++ SPDK_TEST_NVME_FDP=1
00:00:52.060 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:52.060 ++ RUN_NIGHTLY=1
00:00:52.060 + cd /var/jenkins/workspace/nvme-vg-autotest
00:00:52.060 + nvme_files=()
00:00:52.060 + declare -A nvme_files
00:00:52.060 + backend_dir=/var/lib/libvirt/images/backends
00:00:52.060 + nvme_files['nvme.img']=5G
00:00:52.060 + nvme_files['nvme-cmb.img']=5G
00:00:52.060 + nvme_files['nvme-multi0.img']=4G
00:00:52.060 + nvme_files['nvme-multi1.img']=4G
00:00:52.060 + nvme_files['nvme-multi2.img']=4G
00:00:52.060 + nvme_files['nvme-openstack.img']=8G
00:00:52.060 + nvme_files['nvme-zns.img']=5G
00:00:52.060 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:52.060 + (( SPDK_TEST_FTL == 1 ))
00:00:52.060 + nvme_files["nvme-ftl.img"]=6G
00:00:52.060 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:52.060 + nvme_files["nvme-fdp.img"]=1G
00:00:52.060 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:52.060 + for nvme in "${!nvme_files[@]}"
00:00:52.060 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:00:52.060 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:52.060 + for nvme in "${!nvme_files[@]}"
00:00:52.060 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G
00:00:52.060 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:52.060 + for nvme in "${!nvme_files[@]}"
00:00:52.060 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:00:52.060 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:52.060 + for nvme in "${!nvme_files[@]}"
00:00:52.060 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:00:52.322 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:52.322 + for nvme in "${!nvme_files[@]}"
00:00:52.322 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:00:52.891 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:52.892 + for nvme in "${!nvme_files[@]}"
00:00:52.892 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:00:52.892 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:52.892 + for nvme in "${!nvme_files[@]}"
00:00:52.892 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:00:52.892 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:52.892 + for nvme in "${!nvme_files[@]}"
00:00:52.892 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G
00:00:53.151 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:53.151 + for nvme in "${!nvme_files[@]}"
00:00:53.151 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:00:53.719 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:53.719 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:00:53.719 + echo 'End stage prepare_nvme.sh'
00:00:53.719 End stage prepare_nvme.sh
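prepare_nvme.sh builds an associative array mapping backing-image names to sizes, appends the FTL and FDP images only when the matching SPDK_TEST_* flags are set, then loops create_nvme_img.sh over the array. The internals of create_nvme_img.sh are not shown in this log; judging by the "fmt=raw ... preallocation=falloc" lines it produces, qemu-img can stand in for it in a sketch:

    # Sketch of the nvme_files pattern above; qemu-img is a hypothetical
    # stand-in for create_nvme_img.sh, matching the Formatting lines.
    backend_dir=/var/lib/libvirt/images/backends
    declare -A nvme_files=(
        [nvme.img]=5G [nvme-cmb.img]=5G
        [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
        [nvme-openstack.img]=8G [nvme-zns.img]=5G
    )
    (( SPDK_TEST_FTL == 1 )) && nvme_files[nvme-ftl.img]=6G         # flags come from autorun-spdk.conf
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G
    for name in "${!nvme_files[@]}"; do
        qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/ex7-$name" "${nvme_files[$name]}"
    done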
00:00:53.732 [Pipeline] sh
00:00:54.015 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:54.015 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:00:54.015
00:00:54.015 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:00:54.015 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:00:54.015 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:00:54.015 HELP=0
00:00:54.015 DRY_RUN=0
00:00:54.015 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,
00:00:54.016 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:54.016 NVME_AUTO_CREATE=0
00:00:54.016 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,,
00:00:54.016 NVME_CMB=,,,,
00:00:54.016 NVME_PMR=,,,,
00:00:54.016 NVME_ZNS=,,,,
00:00:54.016 NVME_MS=true,,,,
00:00:54.016 NVME_FDP=,,,on,
00:00:54.016 SPDK_VAGRANT_DISTRO=fedora39
00:00:54.016 SPDK_VAGRANT_VMCPU=10
00:00:54.016 SPDK_VAGRANT_VMRAM=12288
00:00:54.016 SPDK_VAGRANT_PROVIDER=libvirt
00:00:54.016 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:54.016 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:54.016 SPDK_OPENSTACK_NETWORK=0
00:00:54.016 VAGRANT_PACKAGE_BOX=0
00:00:54.016 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:54.016 FORCE_DISTRO=true
00:00:54.016 VAGRANT_BOX_VERSION=
00:00:54.016 EXTRA_VAGRANTFILES=
00:00:54.016 NIC_MODEL=e1000
00:00:54.016
00:00:54.016 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:00:54.016 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:00:56.552 Bringing machine 'default' up with 'libvirt' provider...
00:00:57.488 ==> default: Creating image (snapshot of base box volume).
00:00:57.747 ==> default: Creating domain with the following settings...
00:00:57.747 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732099066_b841b092d5cc3d4dece6
00:00:57.747 ==> default: -- Domain type: kvm
00:00:57.747 ==> default: -- Cpus: 10
00:00:57.747 ==> default: -- Feature: acpi
00:00:57.747 ==> default: -- Feature: apic
00:00:57.747 ==> default: -- Feature: pae
00:00:57.747 ==> default: -- Memory: 12288M
00:00:57.747 ==> default: -- Memory Backing: hugepages:
00:00:57.747 ==> default: -- Management MAC:
00:00:57.747 ==> default: -- Loader:
00:00:57.747 ==> default: -- Nvram:
00:00:57.747 ==> default: -- Base box: spdk/fedora39
00:00:57.747 ==> default: -- Storage pool: default
00:00:57.747 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732099066_b841b092d5cc3d4dece6.img (20G)
00:00:57.747 ==> default: -- Volume Cache: default
00:00:57.747 ==> default: -- Kernel:
00:00:57.747 ==> default: -- Initrd:
00:00:57.747 ==> default: -- Graphics Type: vnc
00:00:57.747 ==> default: -- Graphics Port: -1
00:00:57.747 ==> default: -- Graphics IP: 127.0.0.1
00:00:57.747 ==> default: -- Graphics Password: Not defined
00:00:57.747 ==> default: -- Video Type: cirrus
00:00:57.747 ==> default: -- Video VRAM: 9216
00:00:57.747 ==> default: -- Sound Type:
00:00:57.747 ==> default: -- Keymap: en-us
00:00:57.747 ==> default: -- TPM Path:
00:00:57.747 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:57.747 ==> default: -- Command line args:
00:00:57.747 ==> default: -> value=-device,
00:00:57.747 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:57.747 ==> default: -> value=-drive,
00:00:57.747 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:00:57.747 ==> default: -> value=-device,
00:00:57.747 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:00:57.747 ==> default: -> value=-device,
00:00:57.747 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:57.747 ==> default: -> value=-drive,
00:00:57.747 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0,
00:00:57.747 ==> default: -> value=-device,
00:00:57.747 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:57.747 ==> default: -> value=-device,
00:00:57.747 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:00:57.747 ==> default: -> value=-drive,
00:00:57.747 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:00:57.747 ==> default: -> value=-device,
00:00:57.747 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:57.747 ==> default: -> value=-drive,
00:00:57.747 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:00:57.747 ==> default: -> value=-device,
00:00:57.747 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:57.747 ==> default: -> value=-drive,
00:00:57.747 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:00:57.747 ==> default: -> value=-device,
00:00:57.747 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:57.747 ==> default: -> value=-device,
00:00:57.747 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:00:57.747 ==> default: -> value=-device,
00:00:57.747 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:00:57.747 ==> default: -> value=-drive,
00:00:57.747 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:00:57.747 ==> default: -> value=-device,
00:00:57.747 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
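Each backing file above becomes one -drive with if=none plus a -device nvme controller and a -device nvme-ns namespace bound to it: controller 12342 carries three namespaces (the multi* images), the FTL namespace gets ms=64 for per-block metadata, and controller 12343 sits in an NVMe subsystem with Flexible Data Placement enabled (fdp.runs/fdp.nrg/fdp.nruh). A trimmed, hand-runnable sketch of that wiring; the guest boot disk is a placeholder and QEMU 8.0 or newer is assumed for the FDP knobs:

    # Sketch of the NVMe wiring above (placeholder boot disk, QEMU >= 8.0).
    qemu-system-x86_64 -machine q35,accel=kvm -m 2048 \
        -drive file=./guest-boot.img,if=virtio \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0 \
        -device nvme,id=nvme-1,serial=12341 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096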
00:00:58.314 ==> default: Creating shared folders metadata...
00:00:58.314 ==> default: Starting domain.
00:01:00.850 ==> default: Waiting for domain to get an IP address...
00:01:15.743 ==> default: Waiting for SSH to become available...
00:01:17.647 ==> default: Configuring and enabling network interfaces...
00:01:22.917 default: SSH address: 192.168.121.64:22
00:01:22.917 default: SSH username: vagrant
00:01:22.917 default: SSH auth method: private key
00:01:25.450 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:35.431 ==> default: Mounting SSHFS shared folder...
00:01:36.398 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:36.398 ==> default: Checking Mount..
00:01:38.323 ==> default: Folder Successfully Mounted!
00:01:38.323 ==> default: Running provisioner: file...
00:01:39.259 default: ~/.gitconfig => .gitconfig
00:01:39.517
00:01:39.517 SUCCESS!
00:01:39.517
00:01:39.517 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:39.517 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:39.517 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:39.517
00:01:39.526 [Pipeline] }
00:01:39.542 [Pipeline] // stage
00:01:39.552 [Pipeline] dir
00:01:39.552 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:01:39.554 [Pipeline] {
00:01:39.570 [Pipeline] catchError
00:01:39.572 [Pipeline] {
00:01:39.585 [Pipeline] sh
00:01:39.866 + vagrant ssh-config --host vagrant
00:01:39.866 + sed -ne /^Host/,$p
00:01:39.866 + tee ssh_conf
00:01:43.153 Host vagrant
00:01:43.153 HostName 192.168.121.64
00:01:43.153 User vagrant
00:01:43.153 Port 22
00:01:43.153 UserKnownHostsFile /dev/null
00:01:43.153 StrictHostKeyChecking no
00:01:43.153 PasswordAuthentication no
00:01:43.153 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:43.153 IdentitiesOnly yes
00:01:43.153 LogLevel FATAL
00:01:43.153 ForwardAgent yes
00:01:43.153 ForwardX11 yes
00:01:43.153
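Dumping vagrant ssh-config into ssh_conf, as done above, is what lets every later step drive the guest with stock ssh/scp instead of vagrant ssh. The pattern in isolation:

    # Reuse vagrant's SSH parameters with plain OpenSSH tools.
    vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' > ssh_conf
    ssh -F ssh_conf vagrant@vagrant 'uname -a'                     # run a command in the guest
    scp -F ssh_conf ./autorun-spdk.conf vagrant@vagrant:spdk_repo  # copy a file in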
00:01:43.165 [Pipeline] withEnv
00:01:43.167 [Pipeline] {
00:01:43.246 [Pipeline] sh
00:01:43.524 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:43.524 source /etc/os-release
00:01:43.524 [[ -e /image.version ]] && img=$(< /image.version)
00:01:43.524 # Minimal, systemd-like check.
00:01:43.524 if [[ -e /.dockerenv ]]; then
00:01:43.524 # Clear garbage from the node's name:
00:01:43.524 # agt-er_autotest_547-896 -> autotest_547-896
00:01:43.524 # $HOSTNAME is the actual container id
00:01:43.524 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:43.524 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:43.524 # We can assume this is a mount from the host where the container is running,
00:01:43.524 # so fetch its hostname to easily identify the target swarm worker.
00:01:43.524 container="$(< /etc/hostname) ($agent)"
00:01:43.524 else
00:01:43.524 # Fallback
00:01:43.524 container=$agent
00:01:43.524 fi
00:01:43.524 fi
00:01:43.524 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:43.524
00:01:43.793 [Pipeline] }
00:01:43.807 [Pipeline] // withEnv
00:01:43.813 [Pipeline] setCustomBuildProperty
00:01:43.825 [Pipeline] stage
00:01:43.827 [Pipeline] { (Tests)
00:01:43.841 [Pipeline] sh
00:01:44.120 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:44.391 [Pipeline] sh
00:01:44.728 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:45.002 [Pipeline] timeout
00:01:45.003 Timeout set to expire in 50 min
00:01:45.005 [Pipeline] {
00:01:45.018 [Pipeline] sh
00:01:45.297 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:45.864 HEAD is now at a5dab6cf7 test/nvme/xnvme: Make sure nvme selected for tests is not used
00:01:45.875 [Pipeline] sh
00:01:46.151 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:46.426 [Pipeline] sh
00:01:46.712 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:46.986 [Pipeline] sh
00:01:47.266 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:01:47.524 ++ readlink -f spdk_repo
00:01:47.524 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:47.524 + [[ -n /home/vagrant/spdk_repo ]]
00:01:47.524 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:47.524 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:47.524 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:47.524 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:47.524 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:47.524 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:47.524 + cd /home/vagrant/spdk_repo
00:01:47.524 + source /etc/os-release
00:01:47.524 ++ NAME='Fedora Linux'
00:01:47.524 ++ VERSION='39 (Cloud Edition)'
00:01:47.524 ++ ID=fedora
00:01:47.524 ++ VERSION_ID=39
00:01:47.524 ++ VERSION_CODENAME=
00:01:47.524 ++ PLATFORM_ID=platform:f39
00:01:47.524 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:47.524 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:47.524 ++ LOGO=fedora-logo-icon
00:01:47.524 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:47.524 ++ HOME_URL=https://fedoraproject.org/
00:01:47.524 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:47.524 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:47.524 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:47.524 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:47.524 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:47.524 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:47.524 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:47.524 ++ SUPPORT_END=2024-11-12
00:01:47.524 ++ VARIANT='Cloud Edition'
00:01:47.524 ++ VARIANT_ID=cloud
00:01:47.524 + uname -a
00:01:47.524 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:47.524 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:48.091 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:48.350 Hugepages
00:01:48.350 node hugesize free / total
00:01:48.350 node0 1048576kB 0 / 0
00:01:48.350 node0 2048kB 0 / 0
00:01:48.350
00:01:48.350 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:48.350 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:48.350 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:48.350 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:48.350 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
00:01:48.350 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1
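The Hugepages section of the setup.sh status output above is read from per-NUMA-node sysfs counters, so the same table can be reproduced directly. A sketch using the standard kernel sysfs paths (the paths are standard kernel ABI, not taken from this log):

    # Print free/total hugepages per node and page size, like the table above.
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo "$(basename "$node") ${hp##*hugepages-} $(< "$hp/free_hugepages") / $(< "$hp/nr_hugepages")"
        done
    done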
00:01:48.350 + rm -f /tmp/spdk-ld-path
00:01:48.350 + source autorun-spdk.conf
00:01:48.350 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:48.350 ++ SPDK_TEST_NVME=1
00:01:48.350 ++ SPDK_TEST_FTL=1
00:01:48.350 ++ SPDK_TEST_ISAL=1
00:01:48.350 ++ SPDK_RUN_ASAN=1
00:01:48.350 ++ SPDK_RUN_UBSAN=1
00:01:48.350 ++ SPDK_TEST_XNVME=1
00:01:48.350 ++ SPDK_TEST_NVME_FDP=1
00:01:48.350 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:48.350 ++ RUN_NIGHTLY=1
00:01:48.350 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:48.350 + [[ -n '' ]]
00:01:48.350 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:48.608 + for M in /var/spdk/build-*-manifest.txt
00:01:48.608 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:48.608 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:48.608 + for M in /var/spdk/build-*-manifest.txt
00:01:48.608 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:48.608 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:48.608 + for M in /var/spdk/build-*-manifest.txt
00:01:48.608 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:48.608 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:48.608 ++ uname
00:01:48.608 + [[ Linux == \L\i\n\u\x ]]
00:01:48.609 + sudo dmesg -T
00:01:48.609 + sudo dmesg --clear
00:01:48.609 + dmesg_pid=5245
00:01:48.609 + [[ Fedora Linux == FreeBSD ]]
00:01:48.609 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:48.609 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:48.609 + sudo dmesg -Tw
00:01:48.609 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:48.609 + [[ -x /usr/src/fio-static/fio ]]
00:01:48.609 + export FIO_BIN=/usr/src/fio-static/fio
00:01:48.609 + FIO_BIN=/usr/src/fio-static/fio
00:01:48.609 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:48.609 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:48.609 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:48.609 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:48.609 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:48.609 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:48.609 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:48.609 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:48.609 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:48.609 10:38:37 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:48.609 10:38:37 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:48.609 10:38:37 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:48.609 10:38:37 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:01:48.609 10:38:37 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:01:48.609 10:38:37 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:01:48.609 10:38:37 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:48.609 10:38:37 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:48.609 10:38:37 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:01:48.609 10:38:37 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:01:48.609 10:38:37 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:48.609 10:38:37 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1
00:01:48.609 10:38:37 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:48.609 10:38:37 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:48.867 10:38:37 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:48.867 10:38:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:48.867 10:38:37 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:48.867 10:38:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:48.867 10:38:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:48.867 10:38:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:48.867 10:38:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:48.867 10:38:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:48.867 10:38:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:48.867 10:38:37 -- paths/export.sh@5 -- $ export PATH
00:01:48.867 10:38:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:48.867 10:38:37 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:48.867 10:38:37 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:48.867 10:38:37 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732099117.XXXXXX
00:01:48.867 10:38:37 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732099117.ATHstJ
00:01:48.867 10:38:37 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:48.867 10:38:37 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:48.867 10:38:37 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:48.867 10:38:37 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:48.867 10:38:37 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:48.867 10:38:37 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:48.867 10:38:37 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:48.867 10:38:37 -- common/autotest_common.sh@10 -- $ set +x
00:01:48.867 10:38:37 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:48.867 10:38:37 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:48.867 10:38:37 -- pm/common@17 -- $ local monitor
00:01:48.867 10:38:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:48.867 10:38:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:48.867 10:38:37 -- pm/common@25 -- $ sleep 1
00:01:48.867 10:38:37 -- pm/common@21 -- $ date +%s
00:01:48.867 10:38:37 -- pm/common@21 -- $ date +%s
00:01:48.867 10:38:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732099117
00:01:48.867 10:38:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732099117
00:01:48.867 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732099117_collect-vmstat.pm.log
00:01:48.867 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732099117_collect-cpu-load.pm.log
00:01:49.805 10:38:38 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:49.805 10:38:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:49.805 10:38:38 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:49.805 10:38:38 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:49.805 10:38:38 -- spdk/autobuild.sh@16 -- $ date -u
00:01:49.805 Wed Nov 20 10:38:39 AM UTC 2024
00:01:49.805 10:38:39 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:49.805 v25.01-pre-212-ga5dab6cf7
00:01:49.805 10:38:39 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:49.805 10:38:39 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:49.805 10:38:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:49.805 10:38:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:49.805 10:38:39 -- common/autotest_common.sh@10 -- $ set +x
00:01:49.805 ************************************
00:01:49.805 START TEST asan
00:01:49.805 ************************************
00:01:49.805 using asan
00:01:49.805 10:38:39 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:49.805
00:01:49.805 real 0m0.001s
00:01:49.805 user 0m0.001s
00:01:49.805 sys 0m0.000s
00:01:49.805 10:38:39 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:49.805 10:38:39 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:49.805 ************************************
00:01:49.805 END TEST asan
00:01:49.805 ************************************
00:01:50.064 10:38:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:50.064 10:38:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:50.064 10:38:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:50.064 10:38:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:50.064 10:38:39 -- common/autotest_common.sh@10 -- $ set +x
00:01:50.064 ************************************
00:01:50.064 START TEST ubsan
00:01:50.064 ************************************
00:01:50.064 using ubsan
00:01:50.064 10:38:39 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:50.064
00:01:50.064 real 0m0.000s
00:01:50.064 user 0m0.000s
00:01:50.064 sys 0m0.000s
00:01:50.064 10:38:39 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:50.064 10:38:39 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:50.064 ************************************
00:01:50.064 END TEST ubsan
00:01:50.064 ************************************
00:01:50.064 10:38:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:50.064 10:38:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:50.064 10:38:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:50.064 10:38:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:50.064 10:38:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:50.064 10:38:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:50.064 10:38:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
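The START TEST / END TEST banners and real/user/sys timings above come from SPDK's run_test helper, which wraps a command with banners and timing; its actual definition lives in the repo's test framework. A simplified, hypothetical re-creation of the visible behavior:

    # Hypothetical sketch of the run_test pattern above (not SPDK's implementation).
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                 # run the test command, print real/user/sys
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }
    run_test asan echo 'using asan'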
00:01:50.065 10:38:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:50.065 10:38:39 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:50.065 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:50.065 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:50.633 Using 'verbs' RDMA provider
00:02:06.882 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:21.808 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:22.375 Creating mk/config.mk...done.
00:02:22.375 Creating mk/cc.flags.mk...done.
00:02:22.375 Type 'make' to build.
00:02:22.375 10:39:11 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:22.375 10:39:11 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:22.375 10:39:11 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:22.375 10:39:11 -- common/autotest_common.sh@10 -- $ set +x
00:02:22.375 ************************************
00:02:22.375 START TEST make
00:02:22.375 ************************************
00:02:22.375 10:39:11 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:22.942 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:22.942 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:22.942 meson setup builddir \
00:02:22.942 -Dwith-libaio=enabled \
00:02:22.942 -Dwith-liburing=enabled \
00:02:22.942 -Dwith-libvfn=disabled \
00:02:22.942 -Dwith-spdk=disabled \
00:02:22.942 -Dexamples=false \
00:02:22.942 -Dtests=false \
00:02:22.942 -Dtools=false && \
00:02:22.942 meson compile -C builddir && \
00:02:22.942 cd -)
00:02:22.942 make[1]: Nothing to be done for 'all'.
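The make target above shells into the bundled xnvme and drives a normal out-of-tree Meson build, with examples, tests, and tools switched off to keep the CI build lean. The echoed recipe can be run by hand; a sketch assuming the spdk_repo layout from this log:

    # Hand-run the xnvme sub-build configured above.
    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson setup builddir \
        -Dwith-libaio=enabled -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir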
00:02:25.478 The Meson build system
00:02:25.478 Version: 1.5.0
00:02:25.478 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:25.478 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:25.478 Build type: native build
00:02:25.478 Project name: xnvme
00:02:25.478 Project version: 0.7.5
00:02:25.478 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:25.478 C linker for the host machine: cc ld.bfd 2.40-14
00:02:25.478 Host machine cpu family: x86_64
00:02:25.478 Host machine cpu: x86_64
00:02:25.478 Message: host_machine.system: linux
00:02:25.478 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:25.478 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:25.478 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:25.478 Run-time dependency threads found: YES
00:02:25.478 Has header "setupapi.h" : NO
00:02:25.478 Has header "linux/blkzoned.h" : YES
00:02:25.478 Has header "linux/blkzoned.h" : YES (cached)
00:02:25.478 Has header "libaio.h" : YES
00:02:25.478 Library aio found: YES
00:02:25.478 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:25.478 Run-time dependency liburing found: YES 2.2
00:02:25.478 Dependency libvfn skipped: feature with-libvfn disabled
00:02:25.478 Found CMake: /usr/bin/cmake (3.27.7)
00:02:25.478 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:25.478 Subproject spdk : skipped: feature with-spdk disabled
00:02:25.478 Run-time dependency appleframeworks found: NO (tried framework)
00:02:25.478 Run-time dependency appleframeworks found: NO (tried framework)
00:02:25.478 Library rt found: YES
00:02:25.478 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:25.478 Configuring xnvme_config.h using configuration
00:02:25.478 Configuring xnvme.spec using configuration
00:02:25.478 Run-time dependency bash-completion found: YES 2.11
00:02:25.478 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:25.478 Program cp found: YES (/usr/bin/cp)
00:02:25.478 Build targets in project: 3
00:02:25.478
00:02:25.478 xnvme 0.7.5
00:02:25.478
00:02:25.478 Subprojects
00:02:25.478 spdk : NO Feature 'with-spdk' disabled
00:02:25.478
00:02:25.478 User defined options
00:02:25.478 examples : false
00:02:25.478 tests : false
00:02:25.478 tools : false
00:02:25.478 with-libaio : enabled
00:02:25.478 with-liburing: enabled
00:02:25.478 with-libvfn : disabled
00:02:25.478 with-spdk : disabled
00:02:25.478
00:02:25.478 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:25.478 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:25.478 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:25.478 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:25.478 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:25.478 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:25.478 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:25.478 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:25.478 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:25.478 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:25.478 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:25.478 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:25.478 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:25.737 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:25.737 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:25.737 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:25.737 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:25.737 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:25.737 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:25.737 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:25.737 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:25.737 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:25.737 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:25.737 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:25.737 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:25.737 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:25.737 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:25.737 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:25.737 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:25.737 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:25.737 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:25.737 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:25.737 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:25.737 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:25.737 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:25.737 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:25.737 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:25.737 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:25.737 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:25.737 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:25.737 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:25.737 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:25.996 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:25.996 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:25.996 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:25.996 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:25.996 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:25.996 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:25.996 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:25.996 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:25.996 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:25.996 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:25.996 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:25.996 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:25.996 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:25.996 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:25.996 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:25.996 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:25.996 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:25.996 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:25.996 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:25.996 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:25.996 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:25.996 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:25.996 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:25.996 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:25.996 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:26.255 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:26.255 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:26.255 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:26.255 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:26.255 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:26.255 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:26.255 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:26.255 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:26.514 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:26.514 [75/76] Linking static target lib/libxnvme.a
00:02:26.514 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:26.514 INFO: autodetecting backend as ninja
00:02:26.514 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:26.773 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:34.900 The Meson build system
00:02:34.900 Version: 1.5.0
00:02:34.900 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:34.900 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:34.900 Build type: native build
00:02:34.900 Program cat found: YES (/usr/bin/cat)
00:02:34.900 Project name: DPDK
00:02:34.900 Project version: 24.03.0
00:02:34.900 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:34.900 C linker for the host machine: cc ld.bfd 2.40-14
00:02:34.900 Host machine cpu family: x86_64
00:02:34.900 Host machine cpu: x86_64
00:02:34.900 Message: ## Building in Developer Mode ##
00:02:34.900 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:34.900 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:34.900 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:34.900 Program python3 found: YES (/usr/bin/python3)
00:02:34.900 Program cat found: YES (/usr/bin/cat)
00:02:34.901 Compiler for C supports arguments -march=native: YES
00:02:34.901 Checking for size of "void *" : 8
00:02:34.901 Checking for size of "void *" : 8 (cached)
00:02:34.901 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:34.901 Library m found: YES
00:02:34.901 Library numa found: YES
00:02:34.901 Has header "numaif.h" : YES
00:02:34.901 Library fdt found: NO
00:02:34.901 Library execinfo found: NO
00:02:34.901 Has header "execinfo.h" : YES
00:02:34.901 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:34.901 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:34.901 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:34.901 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:34.901 Run-time dependency openssl found: YES 3.1.1
00:02:34.901 Run-time dependency libpcap found: YES 1.10.4
00:02:34.901 Has header "pcap.h" with dependency libpcap: YES
00:02:34.901 Compiler for C supports arguments -Wcast-qual: YES
00:02:34.901 Compiler for C supports arguments -Wdeprecated: YES
00:02:34.901 Compiler for C supports arguments -Wformat: YES
00:02:34.901 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:34.901 Compiler for C supports arguments -Wformat-security: NO
00:02:34.901 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:34.901 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:34.901 Compiler for C supports arguments -Wnested-externs: YES
00:02:34.901 Compiler for C supports arguments -Wold-style-definition: YES
00:02:34.901 Compiler for C supports arguments -Wpointer-arith: YES
00:02:34.901 Compiler for C supports arguments -Wsign-compare: YES
00:02:34.901 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:34.901 Compiler for C supports arguments -Wundef: YES
00:02:34.901 Compiler for C supports arguments -Wwrite-strings: YES
00:02:34.901 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:34.901 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:34.901 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:34.901 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:34.901 Program objdump found: YES (/usr/bin/objdump)
00:02:34.901 Compiler for C supports arguments -mavx512f: YES
00:02:34.901 Checking if "AVX512 checking" compiles: YES
00:02:34.901 Fetching value of define "__SSE4_2__" : 1
00:02:34.901 Fetching value of define "__AES__" : 1
00:02:34.901 Fetching value of define "__AVX__" : 1
00:02:34.901 Fetching value of define "__AVX2__" : 1
00:02:34.901 Fetching value of define "__AVX512BW__" : 1
00:02:34.901 Fetching value of define "__AVX512CD__" : 1
00:02:34.901 Fetching value of define "__AVX512DQ__" : 1
00:02:34.901 Fetching value of define "__AVX512F__" : 1
00:02:34.901 Fetching value of define "__AVX512VL__" : 1
00:02:34.901 Fetching value of define "__PCLMUL__" : 1
00:02:34.901 Fetching value of define "__RDRND__" : 1
00:02:34.901 Fetching value of define "__RDSEED__" : 1
00:02:34.901 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:34.901 Fetching value of define "__znver1__" : (undefined)
00:02:34.901 Fetching value of define "__znver2__" : (undefined)
00:02:34.901 Fetching value of define "__znver3__" : (undefined)
00:02:34.901 Fetching value of define "__znver4__" : (undefined)
00:02:34.901 Library asan found: YES
00:02:34.901 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:34.901 Message: lib/log: Defining dependency "log"
00:02:34.901 Message: lib/kvargs: Defining dependency "kvargs"
00:02:34.901 Message: lib/telemetry: Defining dependency "telemetry"
00:02:34.901 Library rt found: YES
00:02:34.901 Checking for function "getentropy" : NO
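Each "Compiler for C supports arguments ...: YES/NO" line above is Meson compiling a tiny probe program with the candidate flag and recording whether the compiler accepts it. The same probe can be made by hand; a rough sketch, not Meson's exact invocation:

    # Probe whether cc accepts a flag, roughly what the checks above do.
    supports() {
        echo 'int main(void){return 0;}' |
            cc -Werror "$1" -x c -c -o /dev/null - 2>/dev/null &&
            echo "$1: YES" || echo "$1: NO"
    }
    supports -Wcast-qual
    supports -mavx512f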
00:02:34.901 Message: lib/eal: Defining dependency "eal"
00:02:34.901 Message: lib/ring: Defining dependency "ring"
00:02:34.901 Message: lib/rcu: Defining dependency "rcu"
00:02:34.901 Message: lib/mempool: Defining dependency "mempool"
00:02:34.901 Message: lib/mbuf: Defining dependency "mbuf"
00:02:34.901 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:34.901 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:34.901 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:34.901 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:34.901 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:34.901 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:34.901 Compiler for C supports arguments -mpclmul: YES
00:02:34.901 Compiler for C supports arguments -maes: YES
00:02:34.901 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:34.901 Compiler for C supports arguments -mavx512bw: YES
00:02:34.901 Compiler for C supports arguments -mavx512dq: YES
00:02:34.901 Compiler for C supports arguments -mavx512vl: YES
00:02:34.901 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:34.901 Compiler for C supports arguments -mavx2: YES
00:02:34.901 Compiler for C supports arguments -mavx: YES
00:02:34.901 Message: lib/net: Defining dependency "net"
00:02:34.901 Message: lib/meter: Defining dependency "meter"
00:02:34.901 Message: lib/ethdev: Defining dependency "ethdev"
00:02:34.901 Message: lib/pci: Defining dependency "pci"
00:02:34.901 Message: lib/cmdline: Defining dependency "cmdline"
00:02:34.901 Message: lib/hash: Defining dependency "hash"
00:02:34.901 Message: lib/timer: Defining dependency "timer"
00:02:34.901 Message: lib/compressdev: Defining dependency "compressdev"
00:02:34.901 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:34.901 Message: lib/dmadev: Defining dependency "dmadev"
00:02:34.901 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:34.901 Message: lib/power: Defining dependency "power"
00:02:34.901 Message: lib/reorder: Defining dependency "reorder"
00:02:34.901 Message: lib/security: Defining dependency "security"
00:02:34.901 Has header "linux/userfaultfd.h" : YES
00:02:34.901 Has header "linux/vduse.h" : YES
00:02:34.901 Message: lib/vhost: Defining dependency "vhost"
00:02:34.901 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:34.901 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:34.901 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:34.901 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:34.901 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:34.901 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:34.901 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:34.901 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:34.901 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:34.901 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:34.901 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:34.901 Configuring doxy-api-html.conf using configuration
00:02:34.901 Configuring doxy-api-man.conf using configuration
00:02:34.901 Program mandb found: YES (/usr/bin/mandb)
00:02:34.901 Program sphinx-build found: NO
00:02:34.901 Configuring rte_build_config.h using configuration
00:02:34.901 Message:
00:02:34.901 =================
00:02:34.901 Applications Enabled
00:02:34.901 =================
00:02:34.901
00:02:34.901 apps:
00:02:34.901
00:02:34.901
00:02:34.901 Message:
00:02:34.901 =================
00:02:34.901 Libraries Enabled
00:02:34.901 =================
00:02:34.901
00:02:34.901 libs:
00:02:34.901 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:34.901 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:34.901 cryptodev, dmadev, power, reorder, security, vhost,
00:02:34.901
00:02:34.901 Message:
00:02:34.901 ===============
00:02:34.901 Drivers Enabled
00:02:34.901 ===============
00:02:34.901
00:02:34.901 common:
00:02:34.901
00:02:34.901 bus:
00:02:34.901 pci, vdev,
00:02:34.901 mempool:
00:02:34.901 ring,
00:02:34.901 dma:
00:02:34.901
00:02:34.901 net:
00:02:34.901
00:02:34.901 crypto:
00:02:34.901
00:02:34.901 compress:
00:02:34.901
00:02:34.901 vdpa:
00:02:34.901
00:02:34.901
00:02:34.901 Message:
00:02:34.901 =================
00:02:34.901 Content Skipped
00:02:34.901 =================
00:02:34.901
00:02:34.901 apps:
00:02:34.901 dumpcap: explicitly disabled via build config
00:02:34.901 graph: explicitly disabled via build config
00:02:34.901 pdump: explicitly disabled via build config
00:02:34.901 proc-info: explicitly disabled via build config
00:02:34.901 test-acl: explicitly disabled via build config
00:02:34.901 test-bbdev: explicitly disabled via build config
00:02:34.901 test-cmdline: explicitly disabled via build config
00:02:34.901 test-compress-perf: explicitly disabled via build config
00:02:34.901 test-crypto-perf: explicitly disabled via build config
00:02:34.901 test-dma-perf: explicitly disabled via build config
00:02:34.901 test-eventdev: explicitly disabled via build config
00:02:34.901 test-fib: explicitly disabled via build config
00:02:34.901 test-flow-perf: explicitly disabled via build config
00:02:34.901 test-gpudev: explicitly disabled via build config
00:02:34.901 test-mldev: explicitly disabled via build config
00:02:34.901 test-pipeline: explicitly disabled via build config
00:02:34.901 test-pmd: explicitly disabled via build config
00:02:34.901 test-regex: explicitly disabled via build config
00:02:34.901 test-sad: explicitly disabled via build config
00:02:34.901 test-security-perf: explicitly disabled via build config
00:02:34.901
00:02:34.901 libs:
00:02:34.901 argparse: explicitly disabled via build config
00:02:34.901 metrics: explicitly disabled via build config
00:02:34.901 acl: explicitly disabled via build config
00:02:34.901 bbdev: explicitly disabled via build config
00:02:34.901 bitratestats: explicitly disabled via build config
00:02:34.901 bpf: explicitly disabled via build config
00:02:34.901 cfgfile: explicitly disabled via build config
00:02:34.901 distributor: explicitly disabled via build config
00:02:34.901 efd: explicitly disabled via build config
00:02:34.901 eventdev: explicitly disabled via build config
00:02:34.901 dispatcher: explicitly disabled via build config
00:02:34.901 gpudev: explicitly disabled via build config
00:02:34.901 gro: explicitly disabled via build config
00:02:34.901 gso: explicitly disabled via build config
00:02:34.901 ip_frag: explicitly disabled via build config
00:02:34.901 jobstats: explicitly disabled via build config
00:02:34.901 latencystats: explicitly disabled via build config
00:02:34.901 lpm: explicitly disabled via build config
00:02:34.901 member: explicitly disabled via build config
00:02:34.901 pcapng: explicitly disabled via build config
00:02:34.901 rawdev: explicitly disabled via build config
regexdev: explicitly disabled via build config 00:02:34.901 mldev: explicitly disabled via build config 00:02:34.901 rib: explicitly disabled via build config 00:02:34.901 sched: explicitly disabled via build config 00:02:34.901 stack: explicitly disabled via build config 00:02:34.901 ipsec: explicitly disabled via build config 00:02:34.901 pdcp: explicitly disabled via build config 00:02:34.901 fib: explicitly disabled via build config 00:02:34.901 port: explicitly disabled via build config 00:02:34.901 pdump: explicitly disabled via build config 00:02:34.901 table: explicitly disabled via build config 00:02:34.901 pipeline: explicitly disabled via build config 00:02:34.901 graph: explicitly disabled via build config 00:02:34.901 node: explicitly disabled via build config 00:02:34.901 00:02:34.901 drivers: 00:02:34.901 common/cpt: not in enabled drivers build config 00:02:34.901 common/dpaax: not in enabled drivers build config 00:02:34.901 common/iavf: not in enabled drivers build config 00:02:34.901 common/idpf: not in enabled drivers build config 00:02:34.901 common/ionic: not in enabled drivers build config 00:02:34.901 common/mvep: not in enabled drivers build config 00:02:34.901 common/octeontx: not in enabled drivers build config 00:02:34.901 bus/auxiliary: not in enabled drivers build config 00:02:34.901 bus/cdx: not in enabled drivers build config 00:02:34.901 bus/dpaa: not in enabled drivers build config 00:02:34.901 bus/fslmc: not in enabled drivers build config 00:02:34.901 bus/ifpga: not in enabled drivers build config 00:02:34.902 bus/platform: not in enabled drivers build config 00:02:34.902 bus/uacce: not in enabled drivers build config 00:02:34.902 bus/vmbus: not in enabled drivers build config 00:02:34.902 common/cnxk: not in enabled drivers build config 00:02:34.902 common/mlx5: not in enabled drivers build config 00:02:34.902 common/nfp: not in enabled drivers build config 00:02:34.902 common/nitrox: not in enabled drivers build config 00:02:34.902 common/qat: not in enabled drivers build config 00:02:34.902 common/sfc_efx: not in enabled drivers build config 00:02:34.902 mempool/bucket: not in enabled drivers build config 00:02:34.902 mempool/cnxk: not in enabled drivers build config 00:02:34.902 mempool/dpaa: not in enabled drivers build config 00:02:34.902 mempool/dpaa2: not in enabled drivers build config 00:02:34.902 mempool/octeontx: not in enabled drivers build config 00:02:34.902 mempool/stack: not in enabled drivers build config 00:02:34.902 dma/cnxk: not in enabled drivers build config 00:02:34.902 dma/dpaa: not in enabled drivers build config 00:02:34.902 dma/dpaa2: not in enabled drivers build config 00:02:34.902 dma/hisilicon: not in enabled drivers build config 00:02:34.902 dma/idxd: not in enabled drivers build config 00:02:34.902 dma/ioat: not in enabled drivers build config 00:02:34.902 dma/skeleton: not in enabled drivers build config 00:02:34.902 net/af_packet: not in enabled drivers build config 00:02:34.902 net/af_xdp: not in enabled drivers build config 00:02:34.902 net/ark: not in enabled drivers build config 00:02:34.902 net/atlantic: not in enabled drivers build config 00:02:34.902 net/avp: not in enabled drivers build config 00:02:34.902 net/axgbe: not in enabled drivers build config 00:02:34.902 net/bnx2x: not in enabled drivers build config 00:02:34.902 net/bnxt: not in enabled drivers build config 00:02:34.902 net/bonding: not in enabled drivers build config 00:02:34.902 net/cnxk: not in enabled drivers build config 00:02:34.902 net/cpfl: 
not in enabled drivers build config 00:02:34.902 net/cxgbe: not in enabled drivers build config 00:02:34.902 net/dpaa: not in enabled drivers build config 00:02:34.902 net/dpaa2: not in enabled drivers build config 00:02:34.902 net/e1000: not in enabled drivers build config 00:02:34.902 net/ena: not in enabled drivers build config 00:02:34.902 net/enetc: not in enabled drivers build config 00:02:34.902 net/enetfec: not in enabled drivers build config 00:02:34.902 net/enic: not in enabled drivers build config 00:02:34.902 net/failsafe: not in enabled drivers build config 00:02:34.902 net/fm10k: not in enabled drivers build config 00:02:34.902 net/gve: not in enabled drivers build config 00:02:34.902 net/hinic: not in enabled drivers build config 00:02:34.902 net/hns3: not in enabled drivers build config 00:02:34.902 net/i40e: not in enabled drivers build config 00:02:34.902 net/iavf: not in enabled drivers build config 00:02:34.902 net/ice: not in enabled drivers build config 00:02:34.902 net/idpf: not in enabled drivers build config 00:02:34.902 net/igc: not in enabled drivers build config 00:02:34.902 net/ionic: not in enabled drivers build config 00:02:34.902 net/ipn3ke: not in enabled drivers build config 00:02:34.902 net/ixgbe: not in enabled drivers build config 00:02:34.902 net/mana: not in enabled drivers build config 00:02:34.902 net/memif: not in enabled drivers build config 00:02:34.902 net/mlx4: not in enabled drivers build config 00:02:34.902 net/mlx5: not in enabled drivers build config 00:02:34.902 net/mvneta: not in enabled drivers build config 00:02:34.902 net/mvpp2: not in enabled drivers build config 00:02:34.902 net/netvsc: not in enabled drivers build config 00:02:34.902 net/nfb: not in enabled drivers build config 00:02:34.902 net/nfp: not in enabled drivers build config 00:02:34.902 net/ngbe: not in enabled drivers build config 00:02:34.902 net/null: not in enabled drivers build config 00:02:34.902 net/octeontx: not in enabled drivers build config 00:02:34.902 net/octeon_ep: not in enabled drivers build config 00:02:34.902 net/pcap: not in enabled drivers build config 00:02:34.902 net/pfe: not in enabled drivers build config 00:02:34.902 net/qede: not in enabled drivers build config 00:02:34.902 net/ring: not in enabled drivers build config 00:02:34.902 net/sfc: not in enabled drivers build config 00:02:34.902 net/softnic: not in enabled drivers build config 00:02:34.902 net/tap: not in enabled drivers build config 00:02:34.902 net/thunderx: not in enabled drivers build config 00:02:34.902 net/txgbe: not in enabled drivers build config 00:02:34.902 net/vdev_netvsc: not in enabled drivers build config 00:02:34.902 net/vhost: not in enabled drivers build config 00:02:34.902 net/virtio: not in enabled drivers build config 00:02:34.902 net/vmxnet3: not in enabled drivers build config 00:02:34.902 raw/*: missing internal dependency, "rawdev" 00:02:34.902 crypto/armv8: not in enabled drivers build config 00:02:34.902 crypto/bcmfs: not in enabled drivers build config 00:02:34.902 crypto/caam_jr: not in enabled drivers build config 00:02:34.902 crypto/ccp: not in enabled drivers build config 00:02:34.902 crypto/cnxk: not in enabled drivers build config 00:02:34.902 crypto/dpaa_sec: not in enabled drivers build config 00:02:34.902 crypto/dpaa2_sec: not in enabled drivers build config 00:02:34.902 crypto/ipsec_mb: not in enabled drivers build config 00:02:34.902 crypto/mlx5: not in enabled drivers build config 00:02:34.902 crypto/mvsam: not in enabled drivers build config 
00:02:34.902 crypto/nitrox: not in enabled drivers build config 00:02:34.902 crypto/null: not in enabled drivers build config 00:02:34.902 crypto/octeontx: not in enabled drivers build config 00:02:34.902 crypto/openssl: not in enabled drivers build config 00:02:34.902 crypto/scheduler: not in enabled drivers build config 00:02:34.902 crypto/uadk: not in enabled drivers build config 00:02:34.902 crypto/virtio: not in enabled drivers build config 00:02:34.902 compress/isal: not in enabled drivers build config 00:02:34.902 compress/mlx5: not in enabled drivers build config 00:02:34.902 compress/nitrox: not in enabled drivers build config 00:02:34.902 compress/octeontx: not in enabled drivers build config 00:02:34.902 compress/zlib: not in enabled drivers build config 00:02:34.902 regex/*: missing internal dependency, "regexdev" 00:02:34.902 ml/*: missing internal dependency, "mldev" 00:02:34.902 vdpa/ifc: not in enabled drivers build config 00:02:34.902 vdpa/mlx5: not in enabled drivers build config 00:02:34.902 vdpa/nfp: not in enabled drivers build config 00:02:34.902 vdpa/sfc: not in enabled drivers build config 00:02:34.902 event/*: missing internal dependency, "eventdev" 00:02:34.902 baseband/*: missing internal dependency, "bbdev" 00:02:34.902 gpu/*: missing internal dependency, "gpudev" 00:02:34.902 00:02:34.902 00:02:34.902 Build targets in project: 85 00:02:34.902 00:02:34.902 DPDK 24.03.0 00:02:34.902 00:02:34.902 User defined options 00:02:34.902 buildtype : debug 00:02:34.902 default_library : shared 00:02:34.902 libdir : lib 00:02:34.902 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:34.902 b_sanitize : address 00:02:34.902 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:34.902 c_link_args : 00:02:34.902 cpu_instruction_set: native 00:02:34.902 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:34.902 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:34.902 enable_docs : false 00:02:34.902 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:34.902 enable_kmods : false 00:02:34.902 max_lcores : 128 00:02:34.902 tests : false 00:02:34.902 00:02:34.902 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.902 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:34.902 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:34.902 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:34.902 [3/268] Linking static target lib/librte_kvargs.a 00:02:34.902 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:34.902 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:34.902 [6/268] Linking static target lib/librte_log.a 00:02:34.902 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:34.902 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:34.902 [9/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:34.902 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.902 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:34.902 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:34.902 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:34.902 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:34.902 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:35.161 [16/268] Linking static target lib/librte_telemetry.a 00:02:35.161 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:35.161 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.420 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.420 [20/268] Linking target lib/librte_log.so.24.1 00:02:35.420 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:35.420 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:35.420 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.679 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:35.679 [25/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:35.679 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:35.679 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:35.679 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:35.679 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:35.679 [30/268] Linking target lib/librte_kvargs.so.24.1 00:02:35.679 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:35.679 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:35.937 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.937 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:35.937 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:35.937 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:35.937 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:36.196 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:36.196 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:36.196 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:36.196 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:36.197 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:36.197 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:36.197 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:36.455 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:36.455 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:36.455 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:36.455 
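The "User defined options" block a few entries above is meson's echo of how SPDK's build scripts configured the bundled DPDK. A minimal standalone sketch of that configuration step, assuming stock meson and DPDK option names (the actual command is issued internally by SPDK's configure and never appears in this log):

    # Run from the DPDK source tree; every option value below is copied
    # verbatim from the "User defined options" summary printed above.
    meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --buildtype=debug --default-library=shared --libdir=lib \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
        -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
        -Denable_docs=false -Denable_kmods=false \
        -Dmax_lcores=128 -Dtests=false
    # The build itself is then the ninja invocation the log prints later:
    ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10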
[48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:36.713 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:36.714 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:36.714 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:36.714 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:36.714 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:36.714 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:36.972 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:36.972 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:36.972 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:36.972 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:37.231 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:37.231 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:37.231 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:37.231 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:37.231 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:37.231 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:37.489 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:37.489 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:37.489 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:37.748 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:37.748 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:37.748 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:37.748 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:37.748 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:38.007 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:38.007 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:38.007 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:38.007 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:38.007 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:38.007 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:38.007 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:38.007 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:38.266 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:38.266 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:38.266 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:38.266 [84/268] Linking static target lib/librte_ring.a 00:02:38.266 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:38.525 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:38.525 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:38.525 [88/268] Linking static target lib/librte_eal.a 00:02:38.525 [89/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:38.525 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:38.783 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:38.783 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:38.783 [93/268] Linking static target lib/librte_rcu.a 00:02:38.783 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.783 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:38.784 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:38.784 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:38.784 [98/268] Linking static target lib/librte_mempool.a 00:02:38.784 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:38.784 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:39.042 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:39.301 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:39.301 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.301 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:39.301 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:39.301 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:39.301 [107/268] Linking static target lib/librte_meter.a 00:02:39.581 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:39.582 [109/268] Linking static target lib/librte_net.a 00:02:39.582 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:39.582 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:39.855 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:39.855 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.855 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:39.855 [115/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:39.855 [116/268] Linking static target lib/librte_mbuf.a 00:02:40.115 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.116 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:40.116 [119/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.378 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:40.378 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:40.378 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:40.637 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:40.896 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:40.896 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:40.896 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:40.896 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:40.896 [128/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.896 [129/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:40.896 [130/268] Linking static target 
lib/librte_pci.a 00:02:40.896 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:41.154 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:41.154 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:41.154 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:41.154 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:41.154 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:41.154 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:41.154 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:41.154 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:41.154 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:41.154 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:41.413 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:41.413 [143/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.413 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:41.413 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:41.413 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:41.413 [147/268] Linking static target lib/librte_cmdline.a 00:02:41.413 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:41.672 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:41.672 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:41.931 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:41.931 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:41.931 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:41.931 [154/268] Linking static target lib/librte_timer.a 00:02:41.931 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:41.931 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:41.931 [157/268] Linking static target lib/librte_ethdev.a 00:02:42.190 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:42.190 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:42.449 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:42.449 [161/268] Linking static target lib/librte_hash.a 00:02:42.449 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:42.449 [163/268] Linking static target lib/librte_compressdev.a 00:02:42.449 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:42.449 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:42.449 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.709 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:42.709 [168/268] Linking static target lib/librte_dmadev.a 00:02:42.709 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:42.969 [170/268] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:42.969 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:42.969 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.969 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:43.228 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:43.228 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.228 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:43.487 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:43.487 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:43.487 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.487 [180/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:43.487 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:43.487 [182/268] Linking static target lib/librte_cryptodev.a 00:02:43.487 [183/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.487 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:44.054 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:44.054 [186/268] Linking static target lib/librte_reorder.a 00:02:44.054 [187/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:44.054 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:44.054 [189/268] Linking static target lib/librte_power.a 00:02:44.054 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:44.313 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:44.313 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:44.313 [193/268] Linking static target lib/librte_security.a 00:02:44.313 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:44.572 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.831 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:44.831 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:45.091 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.091 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:45.091 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:45.091 [201/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.350 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:45.350 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:45.350 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:45.350 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:45.610 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:45.610 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:45.610 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:45.869 [209/268] Compiling C 
object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:45.869 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:45.869 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.869 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:45.869 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:46.127 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:46.128 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:46.128 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:46.128 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:46.128 [218/268] Linking static target drivers/librte_bus_pci.a 00:02:46.128 [219/268] Linking static target drivers/librte_bus_vdev.a 00:02:46.128 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:46.128 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:46.386 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:46.386 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:46.387 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:46.387 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:46.387 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.645 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.212 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:50.501 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.761 [230/268] Linking target lib/librte_eal.so.24.1 00:02:50.761 [231/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:50.761 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:50.761 [233/268] Linking static target lib/librte_vhost.a 00:02:50.761 [234/268] Linking target lib/librte_meter.so.24.1 00:02:50.761 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:50.761 [236/268] Linking target lib/librte_pci.so.24.1 00:02:50.761 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:51.019 [238/268] Linking target lib/librte_ring.so.24.1 00:02:51.019 [239/268] Linking target lib/librte_timer.so.24.1 00:02:51.019 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:51.019 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:51.019 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:51.019 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:51.019 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:51.019 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:51.019 [246/268] Linking target lib/librte_rcu.so.24.1 00:02:51.019 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:51.020 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.278 
[249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:51.278 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:51.278 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:51.278 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:51.537 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:51.537 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:51.537 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:51.537 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:51.537 [257/268] Linking target lib/librte_net.so.24.1 00:02:51.537 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:51.537 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:51.796 [260/268] Linking target lib/librte_security.so.24.1 00:02:51.796 [261/268] Linking target lib/librte_hash.so.24.1 00:02:51.796 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:51.796 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:51.796 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:51.796 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:51.796 [266/268] Linking target lib/librte_power.so.24.1 00:02:52.733 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.991 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:52.991 INFO: autodetecting backend as ninja 00:02:52.991 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:11.111 CC lib/log/log_flags.o 00:03:11.111 CC lib/log/log_deprecated.o 00:03:11.111 CC lib/log/log.o 00:03:11.111 CC lib/ut/ut.o 00:03:11.111 CC lib/ut_mock/mock.o 00:03:11.111 LIB libspdk_ut_mock.a 00:03:11.111 LIB libspdk_ut.a 00:03:11.111 LIB libspdk_log.a 00:03:11.111 SO libspdk_ut_mock.so.6.0 00:03:11.111 SO libspdk_ut.so.2.0 00:03:11.111 SO libspdk_log.so.7.1 00:03:11.111 SYMLINK libspdk_ut_mock.so 00:03:11.111 SYMLINK libspdk_ut.so 00:03:11.111 SYMLINK libspdk_log.so 00:03:11.111 CXX lib/trace_parser/trace.o 00:03:11.111 CC lib/dma/dma.o 00:03:11.111 CC lib/ioat/ioat.o 00:03:11.111 CC lib/util/cpuset.o 00:03:11.111 CC lib/util/base64.o 00:03:11.111 CC lib/util/crc16.o 00:03:11.111 CC lib/util/bit_array.o 00:03:11.111 CC lib/util/crc32.o 00:03:11.111 CC lib/util/crc32c.o 00:03:11.111 CC lib/vfio_user/host/vfio_user_pci.o 00:03:11.111 CC lib/util/crc32_ieee.o 00:03:11.111 CC lib/util/crc64.o 00:03:11.111 CC lib/util/dif.o 00:03:11.111 CC lib/vfio_user/host/vfio_user.o 00:03:11.111 CC lib/util/fd.o 00:03:11.111 LIB libspdk_dma.a 00:03:11.111 CC lib/util/fd_group.o 00:03:11.111 SO libspdk_dma.so.5.0 00:03:11.111 CC lib/util/file.o 00:03:11.111 CC lib/util/hexlify.o 00:03:11.111 SYMLINK libspdk_dma.so 00:03:11.111 CC lib/util/iov.o 00:03:11.111 LIB libspdk_ioat.a 00:03:11.111 SO libspdk_ioat.so.7.0 00:03:11.111 CC lib/util/math.o 00:03:11.111 SYMLINK libspdk_ioat.so 00:03:11.111 CC lib/util/net.o 00:03:11.111 LIB libspdk_vfio_user.a 00:03:11.111 CC lib/util/pipe.o 00:03:11.111 SO libspdk_vfio_user.so.5.0 00:03:11.111 CC lib/util/strerror_tls.o 00:03:11.111 CC lib/util/string.o 00:03:11.111 SYMLINK libspdk_vfio_user.so 00:03:11.111 CC lib/util/uuid.o 00:03:11.111 CC lib/util/xor.o 00:03:11.111 CC lib/util/zipf.o 00:03:11.111 CC lib/util/md5.o 
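At this point the log hands off from DPDK's ninja backend to SPDK's own Makefile build, which produces the CC/LIB/SO/SYMLINK lines that follow. A rough sketch of the equivalent manual sequence, assuming common SPDK configure flags (the autotest harness drives this through its own scripts, and the exact flags are not shown in this log):

    cd /home/vagrant/spdk_repo/spdk
    # --enable-asan mirrors the b_sanitize=address seen in the DPDK
    # configuration above; the remaining flags are illustrative assumptions.
    ./configure --enable-debug --enable-asan --enable-ubsan --with-shared
    make -j10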
00:03:11.111 LIB libspdk_util.a 00:03:11.111 SO libspdk_util.so.10.1 00:03:11.111 LIB libspdk_trace_parser.a 00:03:11.111 SO libspdk_trace_parser.so.6.0 00:03:11.111 SYMLINK libspdk_util.so 00:03:11.111 SYMLINK libspdk_trace_parser.so 00:03:11.111 CC lib/env_dpdk/env.o 00:03:11.111 CC lib/env_dpdk/memory.o 00:03:11.111 CC lib/env_dpdk/pci.o 00:03:11.111 CC lib/env_dpdk/threads.o 00:03:11.111 CC lib/env_dpdk/init.o 00:03:11.111 CC lib/rdma_utils/rdma_utils.o 00:03:11.111 CC lib/conf/conf.o 00:03:11.111 CC lib/json/json_parse.o 00:03:11.111 CC lib/idxd/idxd.o 00:03:11.111 CC lib/vmd/vmd.o 00:03:11.111 CC lib/env_dpdk/pci_ioat.o 00:03:11.111 LIB libspdk_conf.a 00:03:11.111 CC lib/json/json_util.o 00:03:11.111 SO libspdk_conf.so.6.0 00:03:11.111 CC lib/json/json_write.o 00:03:11.111 LIB libspdk_rdma_utils.a 00:03:11.370 SYMLINK libspdk_conf.so 00:03:11.370 SO libspdk_rdma_utils.so.1.0 00:03:11.370 CC lib/idxd/idxd_user.o 00:03:11.370 SYMLINK libspdk_rdma_utils.so 00:03:11.370 CC lib/vmd/led.o 00:03:11.370 CC lib/env_dpdk/pci_virtio.o 00:03:11.370 CC lib/idxd/idxd_kernel.o 00:03:11.370 CC lib/env_dpdk/pci_vmd.o 00:03:11.370 CC lib/rdma_provider/common.o 00:03:11.370 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:11.629 LIB libspdk_json.a 00:03:11.629 CC lib/env_dpdk/pci_idxd.o 00:03:11.629 SO libspdk_json.so.6.0 00:03:11.629 CC lib/env_dpdk/pci_event.o 00:03:11.629 CC lib/env_dpdk/sigbus_handler.o 00:03:11.629 SYMLINK libspdk_json.so 00:03:11.629 CC lib/env_dpdk/pci_dpdk.o 00:03:11.629 LIB libspdk_idxd.a 00:03:11.629 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:11.629 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:11.629 SO libspdk_idxd.so.12.1 00:03:11.629 LIB libspdk_vmd.a 00:03:11.629 LIB libspdk_rdma_provider.a 00:03:11.629 SO libspdk_vmd.so.6.0 00:03:11.629 SO libspdk_rdma_provider.so.7.0 00:03:11.629 SYMLINK libspdk_idxd.so 00:03:11.889 CC lib/jsonrpc/jsonrpc_server.o 00:03:11.889 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:11.889 CC lib/jsonrpc/jsonrpc_client.o 00:03:11.889 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:11.889 SYMLINK libspdk_rdma_provider.so 00:03:11.889 SYMLINK libspdk_vmd.so 00:03:12.148 LIB libspdk_jsonrpc.a 00:03:12.148 SO libspdk_jsonrpc.so.6.0 00:03:12.148 SYMLINK libspdk_jsonrpc.so 00:03:12.716 LIB libspdk_env_dpdk.a 00:03:12.716 CC lib/rpc/rpc.o 00:03:12.716 SO libspdk_env_dpdk.so.15.1 00:03:12.716 SYMLINK libspdk_env_dpdk.so 00:03:12.975 LIB libspdk_rpc.a 00:03:12.975 SO libspdk_rpc.so.6.0 00:03:12.975 SYMLINK libspdk_rpc.so 00:03:13.234 CC lib/trace/trace_flags.o 00:03:13.234 CC lib/trace/trace.o 00:03:13.234 CC lib/trace/trace_rpc.o 00:03:13.234 CC lib/notify/notify_rpc.o 00:03:13.234 CC lib/notify/notify.o 00:03:13.234 CC lib/keyring/keyring.o 00:03:13.234 CC lib/keyring/keyring_rpc.o 00:03:13.493 LIB libspdk_notify.a 00:03:13.493 SO libspdk_notify.so.6.0 00:03:13.493 LIB libspdk_keyring.a 00:03:13.493 LIB libspdk_trace.a 00:03:13.752 SYMLINK libspdk_notify.so 00:03:13.752 SO libspdk_keyring.so.2.0 00:03:13.752 SO libspdk_trace.so.11.0 00:03:13.752 SYMLINK libspdk_keyring.so 00:03:13.752 SYMLINK libspdk_trace.so 00:03:14.012 CC lib/thread/iobuf.o 00:03:14.012 CC lib/thread/thread.o 00:03:14.012 CC lib/sock/sock.o 00:03:14.012 CC lib/sock/sock_rpc.o 00:03:14.580 LIB libspdk_sock.a 00:03:14.580 SO libspdk_sock.so.10.0 00:03:14.580 SYMLINK libspdk_sock.so 00:03:15.150 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:15.150 CC lib/nvme/nvme_ctrlr.o 00:03:15.150 CC lib/nvme/nvme_fabric.o 00:03:15.150 CC lib/nvme/nvme_ns_cmd.o 00:03:15.150 CC lib/nvme/nvme_ns.o 00:03:15.150 CC 
lib/nvme/nvme_pcie_common.o 00:03:15.150 CC lib/nvme/nvme_qpair.o 00:03:15.150 CC lib/nvme/nvme_pcie.o 00:03:15.150 CC lib/nvme/nvme.o 00:03:15.719 LIB libspdk_thread.a 00:03:15.719 CC lib/nvme/nvme_quirks.o 00:03:15.719 SO libspdk_thread.so.11.0 00:03:15.719 SYMLINK libspdk_thread.so 00:03:15.719 CC lib/nvme/nvme_transport.o 00:03:15.719 CC lib/nvme/nvme_discovery.o 00:03:15.978 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:15.978 CC lib/accel/accel.o 00:03:15.978 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:15.978 CC lib/blob/blobstore.o 00:03:15.978 CC lib/blob/request.o 00:03:16.238 CC lib/blob/zeroes.o 00:03:16.238 CC lib/blob/blob_bs_dev.o 00:03:16.238 CC lib/nvme/nvme_tcp.o 00:03:16.238 CC lib/nvme/nvme_opal.o 00:03:16.497 CC lib/nvme/nvme_io_msg.o 00:03:16.497 CC lib/nvme/nvme_poll_group.o 00:03:16.497 CC lib/nvme/nvme_zns.o 00:03:16.497 CC lib/nvme/nvme_stubs.o 00:03:16.497 CC lib/nvme/nvme_auth.o 00:03:17.066 CC lib/nvme/nvme_cuse.o 00:03:17.066 CC lib/accel/accel_rpc.o 00:03:17.066 CC lib/nvme/nvme_rdma.o 00:03:17.066 CC lib/init/json_config.o 00:03:17.066 CC lib/accel/accel_sw.o 00:03:17.066 CC lib/virtio/virtio.o 00:03:17.066 CC lib/virtio/virtio_vhost_user.o 00:03:17.326 CC lib/init/subsystem.o 00:03:17.326 CC lib/init/subsystem_rpc.o 00:03:17.326 CC lib/init/rpc.o 00:03:17.326 LIB libspdk_accel.a 00:03:17.585 CC lib/virtio/virtio_vfio_user.o 00:03:17.585 SO libspdk_accel.so.16.0 00:03:17.585 CC lib/virtio/virtio_pci.o 00:03:17.585 LIB libspdk_init.a 00:03:17.585 SYMLINK libspdk_accel.so 00:03:17.585 SO libspdk_init.so.6.0 00:03:17.585 CC lib/fsdev/fsdev.o 00:03:17.585 CC lib/fsdev/fsdev_io.o 00:03:17.585 SYMLINK libspdk_init.so 00:03:17.585 CC lib/fsdev/fsdev_rpc.o 00:03:17.844 CC lib/bdev/bdev.o 00:03:17.844 CC lib/bdev/bdev_rpc.o 00:03:17.844 CC lib/bdev/bdev_zone.o 00:03:17.844 CC lib/event/app.o 00:03:17.844 LIB libspdk_virtio.a 00:03:17.844 CC lib/event/reactor.o 00:03:17.844 SO libspdk_virtio.so.7.0 00:03:17.844 SYMLINK libspdk_virtio.so 00:03:17.844 CC lib/event/log_rpc.o 00:03:18.103 CC lib/bdev/part.o 00:03:18.104 CC lib/bdev/scsi_nvme.o 00:03:18.104 CC lib/event/app_rpc.o 00:03:18.104 CC lib/event/scheduler_static.o 00:03:18.376 LIB libspdk_fsdev.a 00:03:18.376 LIB libspdk_nvme.a 00:03:18.376 SO libspdk_fsdev.so.2.0 00:03:18.376 LIB libspdk_event.a 00:03:18.376 SYMLINK libspdk_fsdev.so 00:03:18.376 SO libspdk_event.so.14.0 00:03:18.661 SYMLINK libspdk_event.so 00:03:18.661 SO libspdk_nvme.so.15.0 00:03:18.661 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:18.920 SYMLINK libspdk_nvme.so 00:03:19.488 LIB libspdk_fuse_dispatcher.a 00:03:19.488 LIB libspdk_blob.a 00:03:19.488 SO libspdk_fuse_dispatcher.so.1.0 00:03:19.488 SO libspdk_blob.so.11.0 00:03:19.488 SYMLINK libspdk_fuse_dispatcher.so 00:03:19.747 SYMLINK libspdk_blob.so 00:03:20.006 CC lib/blobfs/blobfs.o 00:03:20.006 CC lib/blobfs/tree.o 00:03:20.006 CC lib/lvol/lvol.o 00:03:20.575 LIB libspdk_bdev.a 00:03:20.575 SO libspdk_bdev.so.17.0 00:03:20.834 SYMLINK libspdk_bdev.so 00:03:20.834 LIB libspdk_blobfs.a 00:03:20.834 SO libspdk_blobfs.so.10.0 00:03:21.093 CC lib/nbd/nbd_rpc.o 00:03:21.093 CC lib/nbd/nbd.o 00:03:21.093 CC lib/nvmf/ctrlr_bdev.o 00:03:21.093 CC lib/nvmf/ctrlr.o 00:03:21.093 CC lib/nvmf/ctrlr_discovery.o 00:03:21.093 CC lib/scsi/dev.o 00:03:21.093 CC lib/ublk/ublk.o 00:03:21.093 CC lib/ftl/ftl_core.o 00:03:21.093 SYMLINK libspdk_blobfs.so 00:03:21.093 CC lib/scsi/lun.o 00:03:21.093 LIB libspdk_lvol.a 00:03:21.093 SO libspdk_lvol.so.10.0 00:03:21.093 SYMLINK libspdk_lvol.so 00:03:21.093 CC lib/scsi/port.o 
00:03:21.093 CC lib/scsi/scsi.o 00:03:21.352 CC lib/nvmf/subsystem.o 00:03:21.352 CC lib/nvmf/nvmf.o 00:03:21.352 CC lib/scsi/scsi_bdev.o 00:03:21.352 CC lib/ftl/ftl_init.o 00:03:21.352 CC lib/ublk/ublk_rpc.o 00:03:21.352 LIB libspdk_nbd.a 00:03:21.352 SO libspdk_nbd.so.7.0 00:03:21.611 CC lib/nvmf/nvmf_rpc.o 00:03:21.611 SYMLINK libspdk_nbd.so 00:03:21.611 CC lib/nvmf/transport.o 00:03:21.611 CC lib/ftl/ftl_layout.o 00:03:21.611 CC lib/scsi/scsi_pr.o 00:03:21.611 LIB libspdk_ublk.a 00:03:21.611 CC lib/nvmf/tcp.o 00:03:21.611 SO libspdk_ublk.so.3.0 00:03:21.871 SYMLINK libspdk_ublk.so 00:03:21.871 CC lib/nvmf/stubs.o 00:03:21.871 CC lib/scsi/scsi_rpc.o 00:03:21.871 CC lib/ftl/ftl_debug.o 00:03:21.871 CC lib/ftl/ftl_io.o 00:03:21.871 CC lib/scsi/task.o 00:03:22.130 CC lib/ftl/ftl_sb.o 00:03:22.130 CC lib/ftl/ftl_l2p.o 00:03:22.130 LIB libspdk_scsi.a 00:03:22.130 CC lib/nvmf/mdns_server.o 00:03:22.130 CC lib/nvmf/rdma.o 00:03:22.130 SO libspdk_scsi.so.9.0 00:03:22.389 CC lib/nvmf/auth.o 00:03:22.389 CC lib/ftl/ftl_l2p_flat.o 00:03:22.389 SYMLINK libspdk_scsi.so 00:03:22.389 CC lib/ftl/ftl_nv_cache.o 00:03:22.389 CC lib/ftl/ftl_band.o 00:03:22.389 CC lib/iscsi/conn.o 00:03:22.647 CC lib/iscsi/init_grp.o 00:03:22.647 CC lib/vhost/vhost.o 00:03:22.647 CC lib/iscsi/iscsi.o 00:03:22.906 CC lib/iscsi/param.o 00:03:22.906 CC lib/iscsi/portal_grp.o 00:03:22.906 CC lib/iscsi/tgt_node.o 00:03:23.165 CC lib/iscsi/iscsi_subsystem.o 00:03:23.165 CC lib/ftl/ftl_band_ops.o 00:03:23.165 CC lib/iscsi/iscsi_rpc.o 00:03:23.165 CC lib/iscsi/task.o 00:03:23.424 CC lib/vhost/vhost_rpc.o 00:03:23.424 CC lib/ftl/ftl_writer.o 00:03:23.424 CC lib/ftl/ftl_rq.o 00:03:23.424 CC lib/ftl/ftl_reloc.o 00:03:23.424 CC lib/ftl/ftl_l2p_cache.o 00:03:23.424 CC lib/ftl/ftl_p2l.o 00:03:23.682 CC lib/ftl/ftl_p2l_log.o 00:03:23.682 CC lib/vhost/vhost_scsi.o 00:03:23.682 CC lib/ftl/mngt/ftl_mngt.o 00:03:23.682 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:23.682 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:23.941 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:23.941 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:23.941 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:23.941 CC lib/vhost/vhost_blk.o 00:03:23.941 CC lib/vhost/rte_vhost_user.o 00:03:23.941 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:23.941 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:24.199 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:24.199 LIB libspdk_iscsi.a 00:03:24.199 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:24.199 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:24.199 SO libspdk_iscsi.so.8.0 00:03:24.199 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:24.199 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:24.457 SYMLINK libspdk_iscsi.so 00:03:24.457 CC lib/ftl/utils/ftl_conf.o 00:03:24.457 CC lib/ftl/utils/ftl_md.o 00:03:24.457 CC lib/ftl/utils/ftl_mempool.o 00:03:24.457 CC lib/ftl/utils/ftl_bitmap.o 00:03:24.457 CC lib/ftl/utils/ftl_property.o 00:03:24.457 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:24.715 LIB libspdk_nvmf.a 00:03:24.715 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:24.715 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:24.715 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:24.715 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:24.715 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:24.715 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:24.715 SO libspdk_nvmf.so.20.0 00:03:24.715 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:24.715 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:24.974 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:24.974 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:24.974 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:24.974 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 
00:03:24.974 CC lib/ftl/base/ftl_base_dev.o 00:03:24.974 SYMLINK libspdk_nvmf.so 00:03:24.974 CC lib/ftl/base/ftl_base_bdev.o 00:03:24.974 CC lib/ftl/ftl_trace.o 00:03:24.974 LIB libspdk_vhost.a 00:03:25.233 SO libspdk_vhost.so.8.0 00:03:25.233 SYMLINK libspdk_vhost.so 00:03:25.233 LIB libspdk_ftl.a 00:03:25.491 SO libspdk_ftl.so.9.0 00:03:26.058 SYMLINK libspdk_ftl.so 00:03:26.316 CC module/env_dpdk/env_dpdk_rpc.o 00:03:26.316 CC module/accel/ioat/accel_ioat.o 00:03:26.316 CC module/keyring/file/keyring.o 00:03:26.316 CC module/blob/bdev/blob_bdev.o 00:03:26.316 CC module/keyring/linux/keyring.o 00:03:26.316 CC module/accel/error/accel_error.o 00:03:26.316 CC module/accel/dsa/accel_dsa.o 00:03:26.316 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:26.316 CC module/sock/posix/posix.o 00:03:26.316 CC module/fsdev/aio/fsdev_aio.o 00:03:26.316 LIB libspdk_env_dpdk_rpc.a 00:03:26.574 SO libspdk_env_dpdk_rpc.so.6.0 00:03:26.574 SYMLINK libspdk_env_dpdk_rpc.so 00:03:26.574 CC module/keyring/linux/keyring_rpc.o 00:03:26.574 CC module/accel/dsa/accel_dsa_rpc.o 00:03:26.574 CC module/keyring/file/keyring_rpc.o 00:03:26.574 CC module/accel/ioat/accel_ioat_rpc.o 00:03:26.574 CC module/accel/error/accel_error_rpc.o 00:03:26.574 LIB libspdk_scheduler_dynamic.a 00:03:26.574 SO libspdk_scheduler_dynamic.so.4.0 00:03:26.574 LIB libspdk_keyring_linux.a 00:03:26.574 SYMLINK libspdk_scheduler_dynamic.so 00:03:26.574 LIB libspdk_accel_dsa.a 00:03:26.574 SO libspdk_keyring_linux.so.1.0 00:03:26.574 LIB libspdk_keyring_file.a 00:03:26.574 LIB libspdk_blob_bdev.a 00:03:26.833 LIB libspdk_accel_ioat.a 00:03:26.833 SO libspdk_accel_dsa.so.5.0 00:03:26.833 SO libspdk_keyring_file.so.2.0 00:03:26.833 SO libspdk_blob_bdev.so.11.0 00:03:26.833 LIB libspdk_accel_error.a 00:03:26.833 SO libspdk_accel_ioat.so.6.0 00:03:26.833 SYMLINK libspdk_keyring_linux.so 00:03:26.833 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:26.833 SO libspdk_accel_error.so.2.0 00:03:26.833 SYMLINK libspdk_blob_bdev.so 00:03:26.833 SYMLINK libspdk_accel_dsa.so 00:03:26.833 SYMLINK libspdk_keyring_file.so 00:03:26.833 CC module/fsdev/aio/linux_aio_mgr.o 00:03:26.833 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:26.833 SYMLINK libspdk_accel_ioat.so 00:03:26.833 SYMLINK libspdk_accel_error.so 00:03:26.833 CC module/accel/iaa/accel_iaa.o 00:03:26.833 CC module/accel/iaa/accel_iaa_rpc.o 00:03:26.833 CC module/scheduler/gscheduler/gscheduler.o 00:03:27.093 LIB libspdk_scheduler_dpdk_governor.a 00:03:27.093 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:27.093 LIB libspdk_accel_iaa.a 00:03:27.093 CC module/bdev/delay/vbdev_delay.o 00:03:27.093 CC module/blobfs/bdev/blobfs_bdev.o 00:03:27.093 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:27.093 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:27.093 CC module/bdev/error/vbdev_error.o 00:03:27.093 SO libspdk_accel_iaa.so.3.0 00:03:27.093 LIB libspdk_scheduler_gscheduler.a 00:03:27.093 CC module/bdev/gpt/gpt.o 00:03:27.093 SO libspdk_scheduler_gscheduler.so.4.0 00:03:27.093 CC module/bdev/lvol/vbdev_lvol.o 00:03:27.093 LIB libspdk_fsdev_aio.a 00:03:27.093 SYMLINK libspdk_accel_iaa.so 00:03:27.093 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:27.093 LIB libspdk_sock_posix.a 00:03:27.093 SO libspdk_fsdev_aio.so.1.0 00:03:27.093 SYMLINK libspdk_scheduler_gscheduler.so 00:03:27.093 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:27.093 SO libspdk_sock_posix.so.6.0 00:03:27.352 CC module/bdev/gpt/vbdev_gpt.o 00:03:27.352 LIB libspdk_blobfs_bdev.a 00:03:27.352 SO libspdk_blobfs_bdev.so.6.0 00:03:27.352 
SYMLINK libspdk_fsdev_aio.so 00:03:27.352 SYMLINK libspdk_sock_posix.so 00:03:27.352 CC module/bdev/error/vbdev_error_rpc.o 00:03:27.352 SYMLINK libspdk_blobfs_bdev.so 00:03:27.352 LIB libspdk_bdev_delay.a 00:03:27.352 SO libspdk_bdev_delay.so.6.0 00:03:27.610 CC module/bdev/malloc/bdev_malloc.o 00:03:27.610 CC module/bdev/null/bdev_null.o 00:03:27.610 CC module/bdev/nvme/bdev_nvme.o 00:03:27.610 LIB libspdk_bdev_error.a 00:03:27.610 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:27.610 SYMLINK libspdk_bdev_delay.so 00:03:27.610 LIB libspdk_bdev_gpt.a 00:03:27.610 CC module/bdev/raid/bdev_raid.o 00:03:27.610 CC module/bdev/passthru/vbdev_passthru.o 00:03:27.610 SO libspdk_bdev_error.so.6.0 00:03:27.610 SO libspdk_bdev_gpt.so.6.0 00:03:27.610 SYMLINK libspdk_bdev_error.so 00:03:27.610 CC module/bdev/raid/bdev_raid_rpc.o 00:03:27.610 SYMLINK libspdk_bdev_gpt.so 00:03:27.610 CC module/bdev/raid/bdev_raid_sb.o 00:03:27.610 CC module/bdev/split/vbdev_split.o 00:03:27.610 CC module/bdev/split/vbdev_split_rpc.o 00:03:27.610 LIB libspdk_bdev_lvol.a 00:03:27.939 SO libspdk_bdev_lvol.so.6.0 00:03:27.939 CC module/bdev/null/bdev_null_rpc.o 00:03:27.939 SYMLINK libspdk_bdev_lvol.so 00:03:27.939 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:27.939 LIB libspdk_bdev_malloc.a 00:03:27.939 LIB libspdk_bdev_split.a 00:03:27.939 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:27.939 SO libspdk_bdev_malloc.so.6.0 00:03:27.939 SO libspdk_bdev_split.so.6.0 00:03:27.940 LIB libspdk_bdev_null.a 00:03:27.940 SO libspdk_bdev_null.so.6.0 00:03:27.940 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:28.229 SYMLINK libspdk_bdev_malloc.so 00:03:28.229 LIB libspdk_bdev_passthru.a 00:03:28.229 CC module/bdev/aio/bdev_aio.o 00:03:28.229 SYMLINK libspdk_bdev_split.so 00:03:28.229 CC module/bdev/xnvme/bdev_xnvme.o 00:03:28.229 SO libspdk_bdev_passthru.so.6.0 00:03:28.229 SYMLINK libspdk_bdev_null.so 00:03:28.229 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:28.229 SYMLINK libspdk_bdev_passthru.so 00:03:28.229 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:28.229 CC module/bdev/ftl/bdev_ftl.o 00:03:28.229 CC module/bdev/iscsi/bdev_iscsi.o 00:03:28.229 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:28.229 LIB libspdk_bdev_zone_block.a 00:03:28.489 LIB libspdk_bdev_xnvme.a 00:03:28.489 SO libspdk_bdev_zone_block.so.6.0 00:03:28.489 CC module/bdev/aio/bdev_aio_rpc.o 00:03:28.489 SO libspdk_bdev_xnvme.so.3.0 00:03:28.489 CC module/bdev/raid/raid0.o 00:03:28.489 SYMLINK libspdk_bdev_zone_block.so 00:03:28.489 CC module/bdev/raid/raid1.o 00:03:28.489 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:28.489 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:28.489 SYMLINK libspdk_bdev_xnvme.so 00:03:28.489 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:28.489 LIB libspdk_bdev_aio.a 00:03:28.489 LIB libspdk_bdev_iscsi.a 00:03:28.489 SO libspdk_bdev_aio.so.6.0 00:03:28.489 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:28.489 SO libspdk_bdev_iscsi.so.6.0 00:03:28.749 SYMLINK libspdk_bdev_aio.so 00:03:28.749 CC module/bdev/nvme/nvme_rpc.o 00:03:28.749 CC module/bdev/nvme/bdev_mdns_client.o 00:03:28.749 SYMLINK libspdk_bdev_iscsi.so 00:03:28.749 CC module/bdev/nvme/vbdev_opal.o 00:03:28.749 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:28.749 LIB libspdk_bdev_ftl.a 00:03:28.749 SO libspdk_bdev_ftl.so.6.0 00:03:28.749 CC module/bdev/raid/concat.o 00:03:28.749 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:28.749 SYMLINK libspdk_bdev_ftl.so 00:03:29.008 LIB libspdk_bdev_raid.a 00:03:29.008 LIB libspdk_bdev_virtio.a 00:03:29.008 SO 
libspdk_bdev_virtio.so.6.0 00:03:29.008 SO libspdk_bdev_raid.so.6.0 00:03:29.267 SYMLINK libspdk_bdev_virtio.so 00:03:29.267 SYMLINK libspdk_bdev_raid.so 00:03:30.644 LIB libspdk_bdev_nvme.a 00:03:30.644 SO libspdk_bdev_nvme.so.7.1 00:03:30.644 SYMLINK libspdk_bdev_nvme.so 00:03:31.212 CC module/event/subsystems/iobuf/iobuf.o 00:03:31.213 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:31.213 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:31.213 CC module/event/subsystems/keyring/keyring.o 00:03:31.213 CC module/event/subsystems/vmd/vmd.o 00:03:31.213 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:31.213 CC module/event/subsystems/sock/sock.o 00:03:31.213 CC module/event/subsystems/scheduler/scheduler.o 00:03:31.213 CC module/event/subsystems/fsdev/fsdev.o 00:03:31.213 LIB libspdk_event_keyring.a 00:03:31.213 LIB libspdk_event_scheduler.a 00:03:31.213 LIB libspdk_event_sock.a 00:03:31.472 LIB libspdk_event_vmd.a 00:03:31.472 LIB libspdk_event_vhost_blk.a 00:03:31.472 LIB libspdk_event_iobuf.a 00:03:31.472 SO libspdk_event_keyring.so.1.0 00:03:31.472 LIB libspdk_event_fsdev.a 00:03:31.472 SO libspdk_event_scheduler.so.4.0 00:03:31.472 SO libspdk_event_sock.so.5.0 00:03:31.472 SO libspdk_event_vhost_blk.so.3.0 00:03:31.472 SO libspdk_event_vmd.so.6.0 00:03:31.472 SO libspdk_event_iobuf.so.3.0 00:03:31.472 SO libspdk_event_fsdev.so.1.0 00:03:31.472 SYMLINK libspdk_event_keyring.so 00:03:31.472 SYMLINK libspdk_event_vhost_blk.so 00:03:31.472 SYMLINK libspdk_event_sock.so 00:03:31.472 SYMLINK libspdk_event_vmd.so 00:03:31.472 SYMLINK libspdk_event_scheduler.so 00:03:31.472 SYMLINK libspdk_event_fsdev.so 00:03:31.472 SYMLINK libspdk_event_iobuf.so 00:03:31.731 CC module/event/subsystems/accel/accel.o 00:03:31.991 LIB libspdk_event_accel.a 00:03:31.991 SO libspdk_event_accel.so.6.0 00:03:31.991 SYMLINK libspdk_event_accel.so 00:03:32.559 CC module/event/subsystems/bdev/bdev.o 00:03:32.559 LIB libspdk_event_bdev.a 00:03:32.559 SO libspdk_event_bdev.so.6.0 00:03:32.818 SYMLINK libspdk_event_bdev.so 00:03:33.078 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:33.078 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:33.078 CC module/event/subsystems/scsi/scsi.o 00:03:33.078 CC module/event/subsystems/nbd/nbd.o 00:03:33.078 CC module/event/subsystems/ublk/ublk.o 00:03:33.337 LIB libspdk_event_ublk.a 00:03:33.337 LIB libspdk_event_nbd.a 00:03:33.337 LIB libspdk_event_scsi.a 00:03:33.337 SO libspdk_event_ublk.so.3.0 00:03:33.337 SO libspdk_event_nbd.so.6.0 00:03:33.337 SO libspdk_event_scsi.so.6.0 00:03:33.337 LIB libspdk_event_nvmf.a 00:03:33.337 SYMLINK libspdk_event_nbd.so 00:03:33.337 SO libspdk_event_nvmf.so.6.0 00:03:33.337 SYMLINK libspdk_event_ublk.so 00:03:33.337 SYMLINK libspdk_event_scsi.so 00:03:33.337 SYMLINK libspdk_event_nvmf.so 00:03:33.596 CC module/event/subsystems/iscsi/iscsi.o 00:03:33.596 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:33.854 LIB libspdk_event_vhost_scsi.a 00:03:33.854 LIB libspdk_event_iscsi.a 00:03:33.854 SO libspdk_event_vhost_scsi.so.3.0 00:03:33.854 SO libspdk_event_iscsi.so.6.0 00:03:33.854 SYMLINK libspdk_event_vhost_scsi.so 00:03:34.113 SYMLINK libspdk_event_iscsi.so 00:03:34.113 SO libspdk.so.6.0 00:03:34.113 SYMLINK libspdk.so 00:03:34.371 CC app/trace_record/trace_record.o 00:03:34.371 CXX app/trace/trace.o 00:03:34.630 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:34.630 CC app/nvmf_tgt/nvmf_main.o 00:03:34.630 CC app/iscsi_tgt/iscsi_tgt.o 00:03:34.630 CC examples/util/zipf/zipf.o 00:03:34.630 CC examples/ioat/perf/perf.o 
00:03:34.630 CC test/thread/poller_perf/poller_perf.o 00:03:34.630 CC test/dma/test_dma/test_dma.o 00:03:34.630 CC test/app/bdev_svc/bdev_svc.o 00:03:34.630 LINK interrupt_tgt 00:03:34.630 LINK zipf 00:03:34.630 LINK nvmf_tgt 00:03:34.630 LINK iscsi_tgt 00:03:34.630 LINK poller_perf 00:03:34.630 LINK spdk_trace_record 00:03:34.888 LINK bdev_svc 00:03:34.888 LINK ioat_perf 00:03:34.888 LINK spdk_trace 00:03:34.888 CC test/app/histogram_perf/histogram_perf.o 00:03:34.888 CC app/spdk_tgt/spdk_tgt.o 00:03:34.888 CC test/app/jsoncat/jsoncat.o 00:03:34.888 CC examples/ioat/verify/verify.o 00:03:34.888 CC test/app/stub/stub.o 00:03:35.148 CC examples/thread/thread/thread_ex.o 00:03:35.148 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:35.148 LINK test_dma 00:03:35.148 CC app/spdk_lspci/spdk_lspci.o 00:03:35.148 LINK histogram_perf 00:03:35.148 CC examples/sock/hello_world/hello_sock.o 00:03:35.148 LINK jsoncat 00:03:35.148 LINK stub 00:03:35.148 LINK spdk_tgt 00:03:35.148 LINK verify 00:03:35.148 LINK spdk_lspci 00:03:35.416 LINK thread 00:03:35.416 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:35.416 TEST_HEADER include/spdk/accel.h 00:03:35.416 TEST_HEADER include/spdk/accel_module.h 00:03:35.416 TEST_HEADER include/spdk/assert.h 00:03:35.416 TEST_HEADER include/spdk/barrier.h 00:03:35.416 CC app/spdk_nvme_perf/perf.o 00:03:35.416 TEST_HEADER include/spdk/base64.h 00:03:35.416 TEST_HEADER include/spdk/bdev.h 00:03:35.416 TEST_HEADER include/spdk/bdev_module.h 00:03:35.416 TEST_HEADER include/spdk/bdev_zone.h 00:03:35.416 TEST_HEADER include/spdk/bit_array.h 00:03:35.416 TEST_HEADER include/spdk/bit_pool.h 00:03:35.416 TEST_HEADER include/spdk/blob_bdev.h 00:03:35.416 LINK hello_sock 00:03:35.416 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:35.416 TEST_HEADER include/spdk/blobfs.h 00:03:35.416 TEST_HEADER include/spdk/blob.h 00:03:35.416 TEST_HEADER include/spdk/conf.h 00:03:35.416 TEST_HEADER include/spdk/config.h 00:03:35.416 TEST_HEADER include/spdk/cpuset.h 00:03:35.416 TEST_HEADER include/spdk/crc16.h 00:03:35.416 TEST_HEADER include/spdk/crc32.h 00:03:35.416 TEST_HEADER include/spdk/crc64.h 00:03:35.416 TEST_HEADER include/spdk/dif.h 00:03:35.416 TEST_HEADER include/spdk/dma.h 00:03:35.416 TEST_HEADER include/spdk/endian.h 00:03:35.416 TEST_HEADER include/spdk/env_dpdk.h 00:03:35.416 TEST_HEADER include/spdk/env.h 00:03:35.416 TEST_HEADER include/spdk/event.h 00:03:35.416 TEST_HEADER include/spdk/fd_group.h 00:03:35.416 TEST_HEADER include/spdk/fd.h 00:03:35.416 TEST_HEADER include/spdk/file.h 00:03:35.416 TEST_HEADER include/spdk/fsdev.h 00:03:35.416 TEST_HEADER include/spdk/fsdev_module.h 00:03:35.416 TEST_HEADER include/spdk/ftl.h 00:03:35.416 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:35.416 TEST_HEADER include/spdk/gpt_spec.h 00:03:35.416 TEST_HEADER include/spdk/hexlify.h 00:03:35.416 TEST_HEADER include/spdk/histogram_data.h 00:03:35.416 TEST_HEADER include/spdk/idxd.h 00:03:35.416 TEST_HEADER include/spdk/idxd_spec.h 00:03:35.416 TEST_HEADER include/spdk/init.h 00:03:35.416 TEST_HEADER include/spdk/ioat.h 00:03:35.416 TEST_HEADER include/spdk/ioat_spec.h 00:03:35.416 TEST_HEADER include/spdk/iscsi_spec.h 00:03:35.416 TEST_HEADER include/spdk/json.h 00:03:35.416 TEST_HEADER include/spdk/jsonrpc.h 00:03:35.416 TEST_HEADER include/spdk/keyring.h 00:03:35.416 TEST_HEADER include/spdk/keyring_module.h 00:03:35.416 TEST_HEADER include/spdk/likely.h 00:03:35.416 TEST_HEADER include/spdk/log.h 00:03:35.416 TEST_HEADER include/spdk/lvol.h 00:03:35.416 TEST_HEADER include/spdk/md5.h 
00:03:35.416 TEST_HEADER include/spdk/memory.h 00:03:35.416 TEST_HEADER include/spdk/mmio.h 00:03:35.416 TEST_HEADER include/spdk/nbd.h 00:03:35.416 TEST_HEADER include/spdk/net.h 00:03:35.416 TEST_HEADER include/spdk/notify.h 00:03:35.416 TEST_HEADER include/spdk/nvme.h 00:03:35.416 TEST_HEADER include/spdk/nvme_intel.h 00:03:35.416 CC app/spdk_nvme_identify/identify.o 00:03:35.416 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:35.416 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:35.416 TEST_HEADER include/spdk/nvme_spec.h 00:03:35.416 LINK nvme_fuzz 00:03:35.416 TEST_HEADER include/spdk/nvme_zns.h 00:03:35.416 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:35.416 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:35.416 TEST_HEADER include/spdk/nvmf.h 00:03:35.416 TEST_HEADER include/spdk/nvmf_spec.h 00:03:35.416 TEST_HEADER include/spdk/nvmf_transport.h 00:03:35.416 TEST_HEADER include/spdk/opal.h 00:03:35.416 TEST_HEADER include/spdk/opal_spec.h 00:03:35.416 TEST_HEADER include/spdk/pci_ids.h 00:03:35.416 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:35.416 TEST_HEADER include/spdk/pipe.h 00:03:35.416 TEST_HEADER include/spdk/queue.h 00:03:35.416 TEST_HEADER include/spdk/reduce.h 00:03:35.416 TEST_HEADER include/spdk/rpc.h 00:03:35.416 CC examples/vmd/lsvmd/lsvmd.o 00:03:35.416 TEST_HEADER include/spdk/scheduler.h 00:03:35.416 TEST_HEADER include/spdk/scsi.h 00:03:35.416 TEST_HEADER include/spdk/scsi_spec.h 00:03:35.416 TEST_HEADER include/spdk/sock.h 00:03:35.416 TEST_HEADER include/spdk/stdinc.h 00:03:35.416 TEST_HEADER include/spdk/string.h 00:03:35.416 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:35.676 TEST_HEADER include/spdk/thread.h 00:03:35.676 TEST_HEADER include/spdk/trace.h 00:03:35.676 TEST_HEADER include/spdk/trace_parser.h 00:03:35.676 TEST_HEADER include/spdk/tree.h 00:03:35.676 TEST_HEADER include/spdk/ublk.h 00:03:35.676 TEST_HEADER include/spdk/util.h 00:03:35.676 TEST_HEADER include/spdk/uuid.h 00:03:35.676 TEST_HEADER include/spdk/version.h 00:03:35.676 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:35.676 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:35.676 TEST_HEADER include/spdk/vhost.h 00:03:35.676 TEST_HEADER include/spdk/vmd.h 00:03:35.676 TEST_HEADER include/spdk/xor.h 00:03:35.676 TEST_HEADER include/spdk/zipf.h 00:03:35.676 CXX test/cpp_headers/accel.o 00:03:35.676 CC test/env/mem_callbacks/mem_callbacks.o 00:03:35.676 CC test/event/event_perf/event_perf.o 00:03:35.676 LINK lsvmd 00:03:35.676 CXX test/cpp_headers/accel_module.o 00:03:35.676 CC test/event/reactor/reactor.o 00:03:35.676 CC test/event/reactor_perf/reactor_perf.o 00:03:35.935 LINK event_perf 00:03:35.935 LINK reactor 00:03:35.935 LINK reactor_perf 00:03:35.935 CC examples/vmd/led/led.o 00:03:35.935 CXX test/cpp_headers/assert.o 00:03:35.935 CXX test/cpp_headers/barrier.o 00:03:35.935 LINK vhost_fuzz 00:03:35.935 CXX test/cpp_headers/base64.o 00:03:35.935 LINK led 00:03:36.194 CXX test/cpp_headers/bdev.o 00:03:36.194 LINK mem_callbacks 00:03:36.194 CC test/event/app_repeat/app_repeat.o 00:03:36.194 CC test/event/scheduler/scheduler.o 00:03:36.194 CXX test/cpp_headers/bdev_module.o 00:03:36.194 CC test/env/vtophys/vtophys.o 00:03:36.194 LINK spdk_nvme_perf 00:03:36.194 LINK app_repeat 00:03:36.454 CC examples/idxd/perf/perf.o 00:03:36.454 LINK vtophys 00:03:36.454 LINK scheduler 00:03:36.454 LINK spdk_nvme_identify 00:03:36.454 CXX test/cpp_headers/bdev_zone.o 00:03:36.454 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:36.454 CC test/nvme/aer/aer.o 00:03:36.454 CC 
test/rpc_client/rpc_client_test.o 00:03:36.454 CXX test/cpp_headers/bit_array.o 00:03:36.454 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:36.713 CC app/spdk_nvme_discover/discovery_aer.o 00:03:36.713 CC test/accel/dif/dif.o 00:03:36.713 LINK rpc_client_test 00:03:36.713 LINK hello_fsdev 00:03:36.713 LINK idxd_perf 00:03:36.713 LINK aer 00:03:36.713 CXX test/cpp_headers/bit_pool.o 00:03:36.713 CC test/blobfs/mkfs/mkfs.o 00:03:36.713 LINK env_dpdk_post_init 00:03:36.713 LINK spdk_nvme_discover 00:03:36.972 CXX test/cpp_headers/blob_bdev.o 00:03:36.972 LINK mkfs 00:03:36.973 CC test/nvme/reset/reset.o 00:03:36.973 CC test/env/memory/memory_ut.o 00:03:36.973 CXX test/cpp_headers/blobfs_bdev.o 00:03:36.973 CC app/spdk_top/spdk_top.o 00:03:36.973 CC examples/accel/perf/accel_perf.o 00:03:36.973 CC test/lvol/esnap/esnap.o 00:03:36.973 CC examples/blob/hello_world/hello_blob.o 00:03:37.232 LINK iscsi_fuzz 00:03:37.232 CXX test/cpp_headers/blobfs.o 00:03:37.232 CC examples/blob/cli/blobcli.o 00:03:37.232 LINK reset 00:03:37.232 LINK hello_blob 00:03:37.232 LINK dif 00:03:37.491 CXX test/cpp_headers/blob.o 00:03:37.491 CC app/vhost/vhost.o 00:03:37.491 CC test/nvme/sgl/sgl.o 00:03:37.491 CXX test/cpp_headers/conf.o 00:03:37.491 LINK accel_perf 00:03:37.750 CC app/spdk_dd/spdk_dd.o 00:03:37.750 CXX test/cpp_headers/config.o 00:03:37.750 CC examples/nvme/hello_world/hello_world.o 00:03:37.750 LINK vhost 00:03:37.750 CXX test/cpp_headers/cpuset.o 00:03:37.750 LINK blobcli 00:03:37.750 LINK sgl 00:03:37.750 CXX test/cpp_headers/crc16.o 00:03:38.009 CXX test/cpp_headers/crc32.o 00:03:38.009 LINK hello_world 00:03:38.009 CC app/fio/nvme/fio_plugin.o 00:03:38.009 LINK spdk_top 00:03:38.009 CXX test/cpp_headers/crc64.o 00:03:38.009 LINK spdk_dd 00:03:38.009 CC test/nvme/e2edp/nvme_dp.o 00:03:38.009 CXX test/cpp_headers/dif.o 00:03:38.009 CC app/fio/bdev/fio_plugin.o 00:03:38.269 LINK memory_ut 00:03:38.269 CC examples/nvme/reconnect/reconnect.o 00:03:38.269 CXX test/cpp_headers/dma.o 00:03:38.269 CC test/env/pci/pci_ut.o 00:03:38.269 LINK nvme_dp 00:03:38.269 CC examples/bdev/hello_world/hello_bdev.o 00:03:38.269 CXX test/cpp_headers/endian.o 00:03:38.269 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:38.269 CC examples/nvme/arbitration/arbitration.o 00:03:38.527 LINK spdk_nvme 00:03:38.527 CXX test/cpp_headers/env_dpdk.o 00:03:38.527 LINK reconnect 00:03:38.527 LINK hello_bdev 00:03:38.527 CC test/nvme/overhead/overhead.o 00:03:38.527 CXX test/cpp_headers/env.o 00:03:38.527 LINK pci_ut 00:03:38.527 LINK spdk_bdev 00:03:38.785 CXX test/cpp_headers/event.o 00:03:38.785 LINK arbitration 00:03:38.785 CC examples/nvme/hotplug/hotplug.o 00:03:38.785 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:38.785 LINK overhead 00:03:38.785 CC examples/bdev/bdevperf/bdevperf.o 00:03:38.785 CC test/bdev/bdevio/bdevio.o 00:03:38.785 CXX test/cpp_headers/fd_group.o 00:03:38.785 LINK nvme_manage 00:03:39.043 CC examples/nvme/abort/abort.o 00:03:39.043 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:39.043 CXX test/cpp_headers/fd.o 00:03:39.043 LINK cmb_copy 00:03:39.043 CXX test/cpp_headers/file.o 00:03:39.043 LINK hotplug 00:03:39.043 CC test/nvme/err_injection/err_injection.o 00:03:39.043 LINK pmr_persistence 00:03:39.302 CXX test/cpp_headers/fsdev.o 00:03:39.302 CXX test/cpp_headers/fsdev_module.o 00:03:39.302 LINK bdevio 00:03:39.302 CC test/nvme/startup/startup.o 00:03:39.302 CC test/nvme/reserve/reserve.o 00:03:39.302 LINK err_injection 00:03:39.302 CXX test/cpp_headers/ftl.o 00:03:39.302 LINK abort 
00:03:39.302 CXX test/cpp_headers/fuse_dispatcher.o 00:03:39.302 LINK startup 00:03:39.302 CXX test/cpp_headers/gpt_spec.o 00:03:39.302 CXX test/cpp_headers/hexlify.o 00:03:39.561 CC test/nvme/simple_copy/simple_copy.o 00:03:39.561 LINK reserve 00:03:39.561 CXX test/cpp_headers/histogram_data.o 00:03:39.561 CXX test/cpp_headers/idxd.o 00:03:39.561 CXX test/cpp_headers/idxd_spec.o 00:03:39.561 CXX test/cpp_headers/init.o 00:03:39.561 CXX test/cpp_headers/ioat.o 00:03:39.561 CC test/nvme/boot_partition/boot_partition.o 00:03:39.561 CC test/nvme/connect_stress/connect_stress.o 00:03:39.561 CXX test/cpp_headers/ioat_spec.o 00:03:39.561 LINK bdevperf 00:03:39.561 CC test/nvme/compliance/nvme_compliance.o 00:03:39.820 LINK simple_copy 00:03:39.820 CXX test/cpp_headers/iscsi_spec.o 00:03:39.820 LINK boot_partition 00:03:39.820 LINK connect_stress 00:03:39.820 CXX test/cpp_headers/json.o 00:03:39.820 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:39.820 CC test/nvme/fused_ordering/fused_ordering.o 00:03:39.820 CC test/nvme/fdp/fdp.o 00:03:39.820 CXX test/cpp_headers/jsonrpc.o 00:03:40.077 CXX test/cpp_headers/keyring.o 00:03:40.078 CXX test/cpp_headers/keyring_module.o 00:03:40.078 CC test/nvme/cuse/cuse.o 00:03:40.078 LINK doorbell_aers 00:03:40.078 LINK fused_ordering 00:03:40.078 LINK nvme_compliance 00:03:40.078 CC examples/nvmf/nvmf/nvmf.o 00:03:40.078 CXX test/cpp_headers/likely.o 00:03:40.078 CXX test/cpp_headers/log.o 00:03:40.078 CXX test/cpp_headers/lvol.o 00:03:40.078 CXX test/cpp_headers/md5.o 00:03:40.078 CXX test/cpp_headers/memory.o 00:03:40.078 CXX test/cpp_headers/mmio.o 00:03:40.337 CXX test/cpp_headers/nbd.o 00:03:40.337 CXX test/cpp_headers/net.o 00:03:40.337 CXX test/cpp_headers/notify.o 00:03:40.337 LINK fdp 00:03:40.337 CXX test/cpp_headers/nvme.o 00:03:40.337 CXX test/cpp_headers/nvme_intel.o 00:03:40.337 CXX test/cpp_headers/nvme_ocssd.o 00:03:40.337 LINK nvmf 00:03:40.337 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:40.337 CXX test/cpp_headers/nvme_spec.o 00:03:40.337 CXX test/cpp_headers/nvme_zns.o 00:03:40.337 CXX test/cpp_headers/nvmf_cmd.o 00:03:40.337 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:40.595 CXX test/cpp_headers/nvmf.o 00:03:40.595 CXX test/cpp_headers/nvmf_spec.o 00:03:40.595 CXX test/cpp_headers/nvmf_transport.o 00:03:40.595 CXX test/cpp_headers/opal.o 00:03:40.595 CXX test/cpp_headers/opal_spec.o 00:03:40.595 CXX test/cpp_headers/pci_ids.o 00:03:40.595 CXX test/cpp_headers/pipe.o 00:03:40.595 CXX test/cpp_headers/queue.o 00:03:40.595 CXX test/cpp_headers/reduce.o 00:03:40.595 CXX test/cpp_headers/rpc.o 00:03:40.595 CXX test/cpp_headers/scheduler.o 00:03:40.595 CXX test/cpp_headers/scsi.o 00:03:40.853 CXX test/cpp_headers/scsi_spec.o 00:03:40.853 CXX test/cpp_headers/sock.o 00:03:40.853 CXX test/cpp_headers/stdinc.o 00:03:40.853 CXX test/cpp_headers/string.o 00:03:40.853 CXX test/cpp_headers/thread.o 00:03:40.853 CXX test/cpp_headers/trace.o 00:03:40.853 CXX test/cpp_headers/trace_parser.o 00:03:40.853 CXX test/cpp_headers/tree.o 00:03:40.853 CXX test/cpp_headers/ublk.o 00:03:40.853 CXX test/cpp_headers/util.o 00:03:40.853 CXX test/cpp_headers/uuid.o 00:03:40.853 CXX test/cpp_headers/version.o 00:03:40.853 CXX test/cpp_headers/vfio_user_pci.o 00:03:40.853 CXX test/cpp_headers/vfio_user_spec.o 00:03:40.853 CXX test/cpp_headers/vhost.o 00:03:40.853 CXX test/cpp_headers/vmd.o 00:03:41.112 CXX test/cpp_headers/xor.o 00:03:41.112 CXX test/cpp_headers/zipf.o 00:03:41.371 LINK cuse 00:03:42.747 LINK esnap 00:03:43.006 00:03:43.006 real 1m20.643s 00:03:43.006 
user 7m2.890s
00:03:43.006 sys 1m46.725s
00:03:43.006 10:40:32 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:43.006 10:40:32 make -- common/autotest_common.sh@10 -- $ set +x
00:03:43.006 ************************************
00:03:43.006 END TEST make
00:03:43.006 ************************************
00:03:43.264 10:40:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:03:43.264 10:40:32 -- pm/common@29 -- $ signal_monitor_resources TERM
00:03:43.264 10:40:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:03:43.264 10:40:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:43.264 10:40:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:03:43.264 10:40:32 -- pm/common@44 -- $ pid=5287
00:03:43.264 10:40:32 -- pm/common@50 -- $ kill -TERM 5287
00:03:43.264 10:40:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:43.264 10:40:32 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:03:43.264 10:40:32 -- pm/common@44 -- $ pid=5289
00:03:43.264 10:40:32 -- pm/common@50 -- $ kill -TERM 5289
00:03:43.264 10:40:32 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:03:43.264 10:40:32 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:43.264 10:40:32 -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:43.264 10:40:32 -- common/autotest_common.sh@1693 -- # lcov --version
00:03:43.264 10:40:32 -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:43.264 10:40:32 -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:43.264 10:40:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:43.264 10:40:32 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:43.264 10:40:32 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:43.264 10:40:32 -- scripts/common.sh@336 -- # IFS=.-:
00:03:43.264 10:40:32 -- scripts/common.sh@336 -- # read -ra ver1
00:03:43.264 10:40:32 -- scripts/common.sh@337 -- # IFS=.-:
00:03:43.264 10:40:32 -- scripts/common.sh@337 -- # read -ra ver2
00:03:43.264 10:40:32 -- scripts/common.sh@338 -- # local 'op=<'
00:03:43.264 10:40:32 -- scripts/common.sh@340 -- # ver1_l=2
00:03:43.264 10:40:32 -- scripts/common.sh@341 -- # ver2_l=1
00:03:43.264 10:40:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:43.264 10:40:32 -- scripts/common.sh@344 -- # case "$op" in
00:03:43.264 10:40:32 -- scripts/common.sh@345 -- # : 1
00:03:43.264 10:40:32 -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:43.264 10:40:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:03:43.264 10:40:32 -- scripts/common.sh@365 -- # decimal 1 00:03:43.264 10:40:32 -- scripts/common.sh@353 -- # local d=1 00:03:43.586 10:40:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.586 10:40:32 -- scripts/common.sh@355 -- # echo 1 00:03:43.586 10:40:32 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:43.586 10:40:32 -- scripts/common.sh@366 -- # decimal 2 00:03:43.586 10:40:32 -- scripts/common.sh@353 -- # local d=2 00:03:43.586 10:40:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.586 10:40:32 -- scripts/common.sh@355 -- # echo 2 00:03:43.586 10:40:32 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:43.586 10:40:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:43.586 10:40:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:43.586 10:40:32 -- scripts/common.sh@368 -- # return 0 00:03:43.586 10:40:32 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.586 10:40:32 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:43.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.586 --rc genhtml_branch_coverage=1 00:03:43.586 --rc genhtml_function_coverage=1 00:03:43.586 --rc genhtml_legend=1 00:03:43.586 --rc geninfo_all_blocks=1 00:03:43.586 --rc geninfo_unexecuted_blocks=1 00:03:43.586 00:03:43.586 ' 00:03:43.586 10:40:32 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:43.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.586 --rc genhtml_branch_coverage=1 00:03:43.586 --rc genhtml_function_coverage=1 00:03:43.586 --rc genhtml_legend=1 00:03:43.586 --rc geninfo_all_blocks=1 00:03:43.586 --rc geninfo_unexecuted_blocks=1 00:03:43.586 00:03:43.586 ' 00:03:43.586 10:40:32 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:43.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.586 --rc genhtml_branch_coverage=1 00:03:43.586 --rc genhtml_function_coverage=1 00:03:43.586 --rc genhtml_legend=1 00:03:43.586 --rc geninfo_all_blocks=1 00:03:43.586 --rc geninfo_unexecuted_blocks=1 00:03:43.586 00:03:43.586 ' 00:03:43.586 10:40:32 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:43.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.586 --rc genhtml_branch_coverage=1 00:03:43.586 --rc genhtml_function_coverage=1 00:03:43.586 --rc genhtml_legend=1 00:03:43.586 --rc geninfo_all_blocks=1 00:03:43.586 --rc geninfo_unexecuted_blocks=1 00:03:43.586 00:03:43.586 ' 00:03:43.586 10:40:32 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:43.586 10:40:32 -- nvmf/common.sh@7 -- # uname -s 00:03:43.586 10:40:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:43.586 10:40:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:43.586 10:40:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:43.586 10:40:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:43.586 10:40:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:43.586 10:40:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:43.586 10:40:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:43.586 10:40:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:43.586 10:40:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:43.586 10:40:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:43.587 10:40:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:024c4b49-b590-476c-8262-62dc32414747 00:03:43.587 
10:40:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=024c4b49-b590-476c-8262-62dc32414747 00:03:43.587 10:40:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:43.587 10:40:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:43.587 10:40:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:43.587 10:40:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:43.587 10:40:32 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:43.587 10:40:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:43.587 10:40:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:43.587 10:40:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:43.587 10:40:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:43.587 10:40:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.587 10:40:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.587 10:40:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.587 10:40:32 -- paths/export.sh@5 -- # export PATH 00:03:43.587 10:40:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.587 10:40:32 -- nvmf/common.sh@51 -- # : 0 00:03:43.587 10:40:32 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:43.587 10:40:32 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:43.587 10:40:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:43.587 10:40:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:43.587 10:40:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:43.587 10:40:32 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:43.587 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:43.587 10:40:32 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:43.587 10:40:32 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:43.587 10:40:32 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:43.587 10:40:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:43.587 10:40:32 -- spdk/autotest.sh@32 -- # uname -s 00:03:43.587 10:40:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:43.587 10:40:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:43.587 10:40:32 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:43.587 10:40:32 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:43.587 10:40:32 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:43.587 10:40:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:43.587 10:40:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:43.587 10:40:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:43.587 10:40:32 -- spdk/autotest.sh@48 -- # udevadm_pid=54714 00:03:43.587 10:40:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:43.587 10:40:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:43.587 10:40:32 -- pm/common@17 -- # local monitor 00:03:43.587 10:40:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.587 10:40:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.587 10:40:32 -- pm/common@21 -- # date +%s 00:03:43.587 10:40:32 -- pm/common@25 -- # sleep 1 00:03:43.587 10:40:32 -- pm/common@21 -- # date +%s 00:03:43.587 10:40:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732099232 00:03:43.587 10:40:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732099232 00:03:43.587 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732099232_collect-cpu-load.pm.log 00:03:43.587 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732099232_collect-vmstat.pm.log 00:03:44.523 10:40:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:44.523 10:40:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:44.523 10:40:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.523 10:40:33 -- common/autotest_common.sh@10 -- # set +x 00:03:44.523 10:40:33 -- spdk/autotest.sh@59 -- # create_test_list 00:03:44.523 10:40:33 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:44.523 10:40:33 -- common/autotest_common.sh@10 -- # set +x 00:03:44.523 10:40:33 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:44.523 10:40:33 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:44.523 10:40:33 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:44.523 10:40:33 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:44.523 10:40:33 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:44.524 10:40:33 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:44.524 10:40:33 -- common/autotest_common.sh@1457 -- # uname 00:03:44.524 10:40:33 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:44.524 10:40:33 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:44.524 10:40:33 -- common/autotest_common.sh@1477 -- # uname 00:03:44.524 10:40:33 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:44.524 10:40:33 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:44.524 10:40:33 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:44.782 lcov: LCOV version 1.15 00:03:44.782 10:40:33 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:59.726 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:59.726 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:14.632 10:41:03 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:14.632 10:41:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.632 10:41:03 -- common/autotest_common.sh@10 -- # set +x 00:04:14.632 10:41:03 -- spdk/autotest.sh@78 -- # rm -f 00:04:14.632 10:41:03 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.890 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.455 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:15.455 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:15.714 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:15.714 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:15.714 10:41:04 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:15.714 10:41:04 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:15.714 10:41:04 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:15.714 10:41:04 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:15.714 10:41:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:15.714 10:41:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:15.714 10:41:04 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:15.714 10:41:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:15.714 10:41:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.714 10:41:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:15.714 10:41:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:15.714 10:41:04 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:15.714 10:41:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:15.714 10:41:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.714 10:41:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:15.714 10:41:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:04:15.714 10:41:04 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:04:15.714 10:41:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:04:15.714 10:41:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.714 10:41:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:15.714 10:41:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:15.714 10:41:04 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:15.714 10:41:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:15.714 10:41:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.714 10:41:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:15.714 10:41:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:15.714 10:41:04 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:15.714 10:41:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:15.714 
10:41:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:04:15.714 10:41:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:04:15.714 10:41:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n2
00:04:15.714 10:41:04 -- common/autotest_common.sh@1650 -- # local device=nvme3n2
00:04:15.714 10:41:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]]
00:04:15.714 10:41:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:04:15.714 10:41:04 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:04:15.714 10:41:04 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n3
00:04:15.714 10:41:04 -- common/autotest_common.sh@1650 -- # local device=nvme3n3
00:04:15.714 10:41:04 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]]
00:04:15.714 10:41:04 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:04:15.714 10:41:04 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:04:15.714 10:41:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:15.714 10:41:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:15.714 10:41:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:04:15.714 10:41:04 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:04:15.714 10:41:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:15.714 No valid GPT data, bailing
00:04:15.714 10:41:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:15.714 10:41:04 -- scripts/common.sh@394 -- # pt=
00:04:15.714 10:41:04 -- scripts/common.sh@395 -- # return 1
00:04:15.714 10:41:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:15.714 1+0 records in
00:04:15.714 1+0 records out
00:04:15.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177931 s, 58.9 MB/s
00:04:15.714 10:41:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:15.714 10:41:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:15.714 10:41:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:04:15.714 10:41:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:04:15.714 10:41:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:04:15.714 No valid GPT data, bailing
00:04:15.714 10:41:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:04:15.714 10:41:04 -- scripts/common.sh@394 -- # pt=
00:04:15.714 10:41:04 -- scripts/common.sh@395 -- # return 1
00:04:15.714 10:41:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:04:15.972 1+0 records in
00:04:15.972 1+0 records out
00:04:15.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00587501 s, 178 MB/s
00:04:15.972 10:41:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:15.972 10:41:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:15.972 10:41:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1
00:04:15.972 10:41:04 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt
00:04:15.972 10:41:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1
00:04:15.972 No valid GPT data, bailing
00:04:15.972 10:41:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1
00:04:15.972 10:41:05 -- scripts/common.sh@394 -- # pt=
00:04:15.972 10:41:05 -- scripts/common.sh@395 -- # return 1
00:04:15.972 10:41:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1
00:04:15.972 1+0 records in
00:04:15.972 1+0 records out
00:04:15.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00589104 s, 178 MB/s
00:04:15.972 10:41:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:15.972 10:41:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:15.972 10:41:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1
00:04:15.972 10:41:05 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt
00:04:15.972 10:41:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1
00:04:15.972 No valid GPT data, bailing
00:04:15.972 10:41:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1
00:04:15.972 10:41:05 -- scripts/common.sh@394 -- # pt=
00:04:15.972 10:41:05 -- scripts/common.sh@395 -- # return 1
00:04:15.972 10:41:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1
00:04:15.972 1+0 records in
00:04:15.972 1+0 records out
00:04:15.972 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00608333 s, 172 MB/s
00:04:15.972 10:41:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:15.972 10:41:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:15.973 10:41:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2
00:04:15.973 10:41:05 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt
00:04:15.973 10:41:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2
00:04:15.973 No valid GPT data, bailing
00:04:15.973 10:41:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2
00:04:15.973 10:41:05 -- scripts/common.sh@394 -- # pt=
00:04:15.973 10:41:05 -- scripts/common.sh@395 -- # return 1
00:04:15.973 10:41:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1
00:04:15.973 1+0 records in
00:04:15.973 1+0 records out
00:04:15.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00595403 s, 176 MB/s
00:04:15.973 10:41:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:15.973 10:41:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:15.973 10:41:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3
00:04:15.973 10:41:05 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt
00:04:15.973 10:41:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3
00:04:16.229 No valid GPT data, bailing
00:04:16.229 10:41:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3
00:04:16.229 10:41:05 -- scripts/common.sh@394 -- # pt=
00:04:16.229 10:41:05 -- scripts/common.sh@395 -- # return 1
00:04:16.229 10:41:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1
00:04:16.229 1+0 records in
00:04:16.229 1+0 records out
00:04:16.229 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00615506 s, 170 MB/s
00:04:16.229 10:41:05 -- spdk/autotest.sh@105 -- # sync
00:04:16.229 10:41:05 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:16.229 10:41:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:16.229 10:41:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:19.513 10:41:08 -- spdk/autotest.sh@111 -- # uname -s
00:04:19.513 10:41:08 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:04:19.513 10:41:08 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:04:19.513 10:41:08 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:19.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:20.339 Hugepages
00:04:20.339 node hugesize free / total
00:04:20.339 node0 1048576kB 0 / 0
00:04:20.339 node0 2048kB 0 / 0
00:04:20.339 
00:04:20.339 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:20.598 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:20.598 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:20.857 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:04:20.857 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
00:04:21.119 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:04:21.119 10:41:10 -- spdk/autotest.sh@117 -- # uname -s
00:04:21.119 10:41:10 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:04:21.119 10:41:10 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:04:21.119 10:41:10 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:21.692 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:22.630 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:04:22.630 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:22.630 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:22.630 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:04:22.630 10:41:11 -- common/autotest_common.sh@1517 -- # sleep 1
00:04:23.568 10:41:12 -- common/autotest_common.sh@1518 -- # bdfs=()
00:04:23.568 10:41:12 -- common/autotest_common.sh@1518 -- # local bdfs
00:04:23.568 10:41:12 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:04:23.568 10:41:12 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:04:23.568 10:41:12 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:23.568 10:41:12 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:23.568 10:41:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:23.568 10:41:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:23.568 10:41:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:23.827 10:41:12 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:04:23.827 10:41:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:04:23.827 10:41:12 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:24.395 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:24.654 Waiting for block devices as requested
00:04:24.654 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:04:24.913 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:04:24.913 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:04:24.913 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:04:30.189 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:04:30.189 10:41:19 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:30.189 10:41:19 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:04:30.189 10:41:19 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:04:30.189 10:41:19 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:04:30.189 10:41:19 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:04:30.189 10:41:19 -- common/autotest_common.sh@1488 -- #
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:30.189 10:41:19 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:30.189 10:41:19 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:30.189 10:41:19 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:30.189 10:41:19 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:30.189 10:41:19 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:30.189 10:41:19 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:30.189 10:41:19 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:30.189 10:41:19 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:30.189 10:41:19 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:30.189 10:41:19 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:30.189 10:41:19 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:30.189 10:41:19 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:30.189 10:41:19 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:30.189 10:41:19 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:30.189 10:41:19 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:30.189 10:41:19 -- common/autotest_common.sh@1543 -- # continue 00:04:30.189 10:41:19 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:30.189 10:41:19 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:30.189 10:41:19 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:30.189 10:41:19 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:30.189 10:41:19 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:30.189 10:41:19 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:30.189 10:41:19 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:30.189 10:41:19 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:30.189 10:41:19 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:30.189 10:41:19 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:30.189 10:41:19 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:30.189 10:41:19 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:30.189 10:41:19 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:30.189 10:41:19 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:30.189 10:41:19 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:30.189 10:41:19 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:30.189 10:41:19 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:30.189 10:41:19 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:30.189 10:41:19 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:30.189 10:41:19 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:30.189 10:41:19 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:30.189 10:41:19 -- common/autotest_common.sh@1543 -- # continue 00:04:30.189 10:41:19 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:30.189 10:41:19 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:30.189 10:41:19 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:30.189 10:41:19 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:30.189 10:41:19 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:30.189 10:41:19 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:30.189 10:41:19 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:30.189 10:41:19 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:30.189 10:41:19 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:30.189 10:41:19 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:30.189 10:41:19 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:30.189 10:41:19 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:30.189 10:41:19 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:30.189 10:41:19 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:30.189 10:41:19 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:30.189 10:41:19 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:30.189 10:41:19 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:30.189 10:41:19 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:30.189 10:41:19 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:30.189 10:41:19 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:30.189 10:41:19 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:30.189 10:41:19 -- common/autotest_common.sh@1543 -- # continue 00:04:30.189 10:41:19 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:30.190 10:41:19 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:30.190 10:41:19 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:30.190 10:41:19 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:30.190 10:41:19 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:30.190 10:41:19 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:30.190 10:41:19 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:30.190 10:41:19 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:30.190 10:41:19 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:30.190 10:41:19 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:30.190 10:41:19 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:30.190 10:41:19 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:30.190 10:41:19 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:30.449 10:41:19 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:30.449 10:41:19 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:30.449 10:41:19 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:30.449 10:41:19 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:30.449 10:41:19 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:30.449 10:41:19 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:30.449 10:41:19 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:30.449 10:41:19 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:04:30.449 10:41:19 -- common/autotest_common.sh@1543 -- # continue 00:04:30.449 10:41:19 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:30.449 10:41:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.449 10:41:19 -- common/autotest_common.sh@10 -- # set +x 00:04:30.449 10:41:19 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:30.449 10:41:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.449 10:41:19 -- common/autotest_common.sh@10 -- # set +x 00:04:30.449 10:41:19 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:31.017 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.984 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:31.984 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:31.984 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:31.984 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:31.984 10:41:21 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:31.984 10:41:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.984 10:41:21 -- common/autotest_common.sh@10 -- # set +x 00:04:31.984 10:41:21 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:31.984 10:41:21 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:31.984 10:41:21 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:31.984 10:41:21 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:31.984 10:41:21 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:31.984 10:41:21 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:31.984 10:41:21 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:31.984 10:41:21 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:31.984 10:41:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:31.984 10:41:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:31.984 10:41:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:31.984 10:41:21 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:31.984 10:41:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:32.243 10:41:21 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:32.243 10:41:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:32.243 10:41:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:32.243 10:41:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:32.243 10:41:21 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:32.243 10:41:21 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.243 10:41:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:32.243 10:41:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:32.243 10:41:21 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:32.244 10:41:21 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.244 10:41:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:32.244 10:41:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:32.244 10:41:21 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:32.244 10:41:21 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:32.244 10:41:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:32.244 10:41:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:32.244 10:41:21 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:32.244 10:41:21 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.244 10:41:21 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:32.244 10:41:21 -- common/autotest_common.sh@1572 -- # return 0 00:04:32.244 10:41:21 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:32.244 10:41:21 -- common/autotest_common.sh@1580 -- # return 0 00:04:32.244 10:41:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:32.244 10:41:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:32.244 10:41:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:32.244 10:41:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:32.244 10:41:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:32.244 10:41:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.244 10:41:21 -- common/autotest_common.sh@10 -- # set +x 00:04:32.244 10:41:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:32.244 10:41:21 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:32.244 10:41:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.244 10:41:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.244 10:41:21 -- common/autotest_common.sh@10 -- # set +x 00:04:32.244 ************************************ 00:04:32.244 START TEST env 00:04:32.244 ************************************ 00:04:32.244 10:41:21 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:32.503 * Looking for test storage... 00:04:32.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:32.503 10:41:21 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.503 10:41:21 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.503 10:41:21 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.503 10:41:21 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.503 10:41:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.503 10:41:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.503 10:41:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.503 10:41:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.503 10:41:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.503 10:41:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.503 10:41:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.503 10:41:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.503 10:41:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.503 10:41:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.503 10:41:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.503 10:41:21 env -- scripts/common.sh@344 -- # case "$op" in 00:04:32.503 10:41:21 env -- scripts/common.sh@345 -- # : 1 00:04:32.503 10:41:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.503 10:41:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.503 10:41:21 env -- scripts/common.sh@365 -- # decimal 1 00:04:32.503 10:41:21 env -- scripts/common.sh@353 -- # local d=1 00:04:32.503 10:41:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.503 10:41:21 env -- scripts/common.sh@355 -- # echo 1 00:04:32.503 10:41:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.503 10:41:21 env -- scripts/common.sh@366 -- # decimal 2 00:04:32.503 10:41:21 env -- scripts/common.sh@353 -- # local d=2 00:04:32.503 10:41:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.503 10:41:21 env -- scripts/common.sh@355 -- # echo 2 00:04:32.503 10:41:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.503 10:41:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.503 10:41:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.503 10:41:21 env -- scripts/common.sh@368 -- # return 0 00:04:32.503 10:41:21 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.503 10:41:21 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.503 --rc genhtml_branch_coverage=1 00:04:32.503 --rc genhtml_function_coverage=1 00:04:32.503 --rc genhtml_legend=1 00:04:32.503 --rc geninfo_all_blocks=1 00:04:32.503 --rc geninfo_unexecuted_blocks=1 00:04:32.503 00:04:32.503 ' 00:04:32.503 10:41:21 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.503 --rc genhtml_branch_coverage=1 00:04:32.503 --rc genhtml_function_coverage=1 00:04:32.503 --rc genhtml_legend=1 00:04:32.503 --rc geninfo_all_blocks=1 00:04:32.503 --rc geninfo_unexecuted_blocks=1 00:04:32.503 00:04:32.503 ' 00:04:32.503 10:41:21 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:32.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.503 --rc genhtml_branch_coverage=1 00:04:32.503 --rc genhtml_function_coverage=1 00:04:32.503 --rc genhtml_legend=1 00:04:32.503 --rc geninfo_all_blocks=1 00:04:32.503 --rc geninfo_unexecuted_blocks=1 00:04:32.503 00:04:32.503 ' 00:04:32.503 10:41:21 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.503 --rc genhtml_branch_coverage=1 00:04:32.503 --rc genhtml_function_coverage=1 00:04:32.503 --rc genhtml_legend=1 00:04:32.503 --rc geninfo_all_blocks=1 00:04:32.503 --rc geninfo_unexecuted_blocks=1 00:04:32.503 00:04:32.503 ' 00:04:32.503 10:41:21 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:32.503 10:41:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.503 10:41:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.503 10:41:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.503 ************************************ 00:04:32.503 START TEST env_memory 00:04:32.503 ************************************ 00:04:32.503 10:41:21 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:32.503 00:04:32.503 00:04:32.503 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.503 http://cunit.sourceforge.net/ 00:04:32.503 00:04:32.503 00:04:32.503 Suite: memory 00:04:32.503 Test: alloc and free memory map ...[2024-11-20 10:41:21.701472] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:32.503 passed
00:04:32.503 Test: mem map translation ...[2024-11-20 10:41:21.745721] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:32.503 [2024-11-20 10:41:21.745869] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:32.503 [2024-11-20 10:41:21.746052] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:32.503 [2024-11-20 10:41:21.746112] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:32.762 passed
00:04:32.762 Test: mem map registration ...[2024-11-20 10:41:21.814039] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:04:32.762 [2024-11-20 10:41:21.814190] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:04:32.762 passed
00:04:32.763 Test: mem map adjacent registrations ...passed
00:04:32.763 
00:04:32.763 Run Summary: Type Total Ran Passed Failed Inactive
00:04:32.763 suites 1 1 n/a 0 0
00:04:32.763 tests 4 4 4 0 0
00:04:32.763 asserts 152 152 152 0 n/a
00:04:32.763 
00:04:32.763 Elapsed time = 0.239 seconds
00:04:32.763 
00:04:32.763 real 0m0.294s
00:04:32.763 user 0m0.245s
00:04:32.763 sys 0m0.039s
00:04:32.763 10:41:21 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:32.763 10:41:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:04:32.763 ************************************
00:04:32.763 END TEST env_memory
00:04:32.763 ************************************
00:04:32.763 10:41:21 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:04:32.763 10:41:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:32.763 10:41:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:32.763 10:41:21 env -- common/autotest_common.sh@10 -- # set +x
00:04:32.763 ************************************
00:04:32.763 START TEST env_vtophys
00:04:32.763 ************************************
00:04:32.763 10:41:21 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:04:33.022 EAL: lib.eal log level changed from notice to debug
00:04:33.022 EAL: Detected lcore 0 as core 0 on socket 0
00:04:33.022 EAL: Detected lcore 1 as core 0 on socket 0
00:04:33.022 EAL: Detected lcore 2 as core 0 on socket 0
00:04:33.022 EAL: Detected lcore 3 as core 0 on socket 0
00:04:33.022 EAL: Detected lcore 4 as core 0 on socket 0
00:04:33.022 EAL: Detected lcore 5 as core 0 on socket 0
00:04:33.022 EAL: Detected lcore 6 as core 0 on socket 0
00:04:33.022 EAL: Detected lcore 7 as core 0 on socket 0
00:04:33.022 EAL: Detected lcore 8 as core 0 on socket 0
00:04:33.022 EAL: Detected lcore 9 as core 0 on socket 0
00:04:33.022 EAL: Maximum logical cores by configuration: 128
00:04:33.022 EAL: Detected CPU lcores: 10
00:04:33.022 EAL: Detected NUMA nodes: 1
00:04:33.022 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:04:33.022 EAL: Detected shared linkage of DPDK
00:04:33.022 EAL: No
shared files mode enabled, IPC will be disabled 00:04:33.022 EAL: Selected IOVA mode 'PA' 00:04:33.022 EAL: Probing VFIO support... 00:04:33.022 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:33.022 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:33.022 EAL: Ask a virtual area of 0x2e000 bytes 00:04:33.022 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:33.022 EAL: Setting up physically contiguous memory... 00:04:33.022 EAL: Setting maximum number of open files to 524288 00:04:33.022 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:33.022 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:33.022 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.022 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:33.022 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.022 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.022 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:33.022 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:33.022 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.022 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:33.022 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.022 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.022 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:33.022 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:33.022 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.022 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:33.022 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.022 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.022 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:33.022 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:33.022 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.022 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:33.022 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.022 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.022 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:33.022 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:33.022 EAL: Hugepages will be freed exactly as allocated. 00:04:33.022 EAL: No shared files mode enabled, IPC is disabled 00:04:33.022 EAL: No shared files mode enabled, IPC is disabled 00:04:33.022 EAL: TSC frequency is ~2490000 KHz 00:04:33.022 EAL: Main lcore 0 is ready (tid=7fad5789ca40;cpuset=[0]) 00:04:33.022 EAL: Trying to obtain current memory policy. 00:04:33.022 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.022 EAL: Restoring previous memory policy: 0 00:04:33.022 EAL: request: mp_malloc_sync 00:04:33.022 EAL: No shared files mode enabled, IPC is disabled 00:04:33.022 EAL: Heap on socket 0 was expanded by 2MB 00:04:33.022 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:33.022 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:33.022 EAL: Mem event callback 'spdk:(nil)' registered 00:04:33.022 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:33.022 00:04:33.022 00:04:33.022 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.022 http://cunit.sourceforge.net/ 00:04:33.022 00:04:33.023 00:04:33.023 Suite: components_suite 00:04:33.591 Test: vtophys_malloc_test ...passed 00:04:33.591 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:33.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.591 EAL: Restoring previous memory policy: 4 00:04:33.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.591 EAL: request: mp_malloc_sync 00:04:33.591 EAL: No shared files mode enabled, IPC is disabled 00:04:33.591 EAL: Heap on socket 0 was expanded by 4MB 00:04:33.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.591 EAL: request: mp_malloc_sync 00:04:33.591 EAL: No shared files mode enabled, IPC is disabled 00:04:33.591 EAL: Heap on socket 0 was shrunk by 4MB 00:04:33.591 EAL: Trying to obtain current memory policy. 00:04:33.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.591 EAL: Restoring previous memory policy: 4 00:04:33.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.591 EAL: request: mp_malloc_sync 00:04:33.591 EAL: No shared files mode enabled, IPC is disabled 00:04:33.591 EAL: Heap on socket 0 was expanded by 6MB 00:04:33.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.591 EAL: request: mp_malloc_sync 00:04:33.591 EAL: No shared files mode enabled, IPC is disabled 00:04:33.591 EAL: Heap on socket 0 was shrunk by 6MB 00:04:33.591 EAL: Trying to obtain current memory policy. 00:04:33.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.591 EAL: Restoring previous memory policy: 4 00:04:33.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.591 EAL: request: mp_malloc_sync 00:04:33.591 EAL: No shared files mode enabled, IPC is disabled 00:04:33.591 EAL: Heap on socket 0 was expanded by 10MB 00:04:33.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.591 EAL: request: mp_malloc_sync 00:04:33.591 EAL: No shared files mode enabled, IPC is disabled 00:04:33.591 EAL: Heap on socket 0 was shrunk by 10MB 00:04:33.591 EAL: Trying to obtain current memory policy. 00:04:33.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.591 EAL: Restoring previous memory policy: 4 00:04:33.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.591 EAL: request: mp_malloc_sync 00:04:33.591 EAL: No shared files mode enabled, IPC is disabled 00:04:33.591 EAL: Heap on socket 0 was expanded by 18MB 00:04:33.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.591 EAL: request: mp_malloc_sync 00:04:33.591 EAL: No shared files mode enabled, IPC is disabled 00:04:33.591 EAL: Heap on socket 0 was shrunk by 18MB 00:04:33.591 EAL: Trying to obtain current memory policy. 00:04:33.591 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.591 EAL: Restoring previous memory policy: 4 00:04:33.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.591 EAL: request: mp_malloc_sync 00:04:33.591 EAL: No shared files mode enabled, IPC is disabled 00:04:33.591 EAL: Heap on socket 0 was expanded by 34MB 00:04:33.591 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.591 EAL: request: mp_malloc_sync 00:04:33.591 EAL: No shared files mode enabled, IPC is disabled 00:04:33.591 EAL: Heap on socket 0 was shrunk by 34MB 00:04:33.850 EAL: Trying to obtain current memory policy. 
00:04:33.850 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.850 EAL: Restoring previous memory policy: 4 00:04:33.850 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.850 EAL: request: mp_malloc_sync 00:04:33.850 EAL: No shared files mode enabled, IPC is disabled 00:04:33.850 EAL: Heap on socket 0 was expanded by 66MB 00:04:33.850 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.850 EAL: request: mp_malloc_sync 00:04:33.850 EAL: No shared files mode enabled, IPC is disabled 00:04:33.850 EAL: Heap on socket 0 was shrunk by 66MB 00:04:34.110 EAL: Trying to obtain current memory policy. 00:04:34.110 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.110 EAL: Restoring previous memory policy: 4 00:04:34.110 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.110 EAL: request: mp_malloc_sync 00:04:34.110 EAL: No shared files mode enabled, IPC is disabled 00:04:34.110 EAL: Heap on socket 0 was expanded by 130MB 00:04:34.110 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.369 EAL: request: mp_malloc_sync 00:04:34.369 EAL: No shared files mode enabled, IPC is disabled 00:04:34.369 EAL: Heap on socket 0 was shrunk by 130MB 00:04:34.369 EAL: Trying to obtain current memory policy. 00:04:34.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.629 EAL: Restoring previous memory policy: 4 00:04:34.629 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.629 EAL: request: mp_malloc_sync 00:04:34.629 EAL: No shared files mode enabled, IPC is disabled 00:04:34.629 EAL: Heap on socket 0 was expanded by 258MB 00:04:34.889 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.889 EAL: request: mp_malloc_sync 00:04:34.889 EAL: No shared files mode enabled, IPC is disabled 00:04:34.889 EAL: Heap on socket 0 was shrunk by 258MB 00:04:35.476 EAL: Trying to obtain current memory policy. 00:04:35.476 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.476 EAL: Restoring previous memory policy: 4 00:04:35.476 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.476 EAL: request: mp_malloc_sync 00:04:35.476 EAL: No shared files mode enabled, IPC is disabled 00:04:35.476 EAL: Heap on socket 0 was expanded by 514MB 00:04:36.413 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.413 EAL: request: mp_malloc_sync 00:04:36.413 EAL: No shared files mode enabled, IPC is disabled 00:04:36.413 EAL: Heap on socket 0 was shrunk by 514MB 00:04:37.350 EAL: Trying to obtain current memory policy. 
00:04:37.350 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.609 EAL: Restoring previous memory policy: 4 00:04:37.609 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.609 EAL: request: mp_malloc_sync 00:04:37.609 EAL: No shared files mode enabled, IPC is disabled 00:04:37.609 EAL: Heap on socket 0 was expanded by 1026MB 00:04:39.512 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.512 EAL: request: mp_malloc_sync 00:04:39.512 EAL: No shared files mode enabled, IPC is disabled 00:04:39.512 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:41.414 passed 00:04:41.414 00:04:41.414 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.414 suites 1 1 n/a 0 0 00:04:41.414 tests 2 2 2 0 0 00:04:41.414 asserts 5733 5733 5733 0 n/a 00:04:41.414 00:04:41.414 Elapsed time = 7.935 seconds 00:04:41.414 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.414 EAL: request: mp_malloc_sync 00:04:41.414 EAL: No shared files mode enabled, IPC is disabled 00:04:41.414 EAL: Heap on socket 0 was shrunk by 2MB 00:04:41.414 EAL: No shared files mode enabled, IPC is disabled 00:04:41.414 EAL: No shared files mode enabled, IPC is disabled 00:04:41.414 EAL: No shared files mode enabled, IPC is disabled 00:04:41.414 00:04:41.414 real 0m8.273s 00:04:41.414 user 0m7.283s 00:04:41.414 sys 0m0.832s 00:04:41.414 ************************************ 00:04:41.414 END TEST env_vtophys 00:04:41.414 ************************************ 00:04:41.414 10:41:30 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.414 10:41:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:41.414 10:41:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:41.414 10:41:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.414 10:41:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.414 10:41:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.414 ************************************ 00:04:41.414 START TEST env_pci 00:04:41.414 ************************************ 00:04:41.414 10:41:30 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:41.414 00:04:41.414 00:04:41.414 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.414 http://cunit.sourceforge.net/ 00:04:41.414 00:04:41.414 00:04:41.414 Suite: pci 00:04:41.414 Test: pci_hook ...[2024-11-20 10:41:30.393676] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57541 has claimed it 00:04:41.414 EAL: Cannot find device (10000:00:01.0) 00:04:41.414 EAL: Failed to attach device on primary process 00:04:41.414 passed 00:04:41.414 00:04:41.414 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.414 suites 1 1 n/a 0 0 00:04:41.414 tests 1 1 1 0 0 00:04:41.414 asserts 25 25 25 0 n/a 00:04:41.414 00:04:41.414 Elapsed time = 0.012 seconds 00:04:41.414 00:04:41.414 real 0m0.125s 00:04:41.414 user 0m0.056s 00:04:41.414 sys 0m0.065s 00:04:41.414 10:41:30 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.414 10:41:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:41.414 ************************************ 00:04:41.414 END TEST env_pci 00:04:41.414 ************************************ 00:04:41.414 10:41:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:41.414 10:41:30 env -- env/env.sh@15 -- # uname 00:04:41.414 10:41:30 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:41.414 10:41:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:41.414 10:41:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:41.414 10:41:30 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:41.414 10:41:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.414 10:41:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.414 ************************************ 00:04:41.414 START TEST env_dpdk_post_init 00:04:41.414 ************************************ 00:04:41.414 10:41:30 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:41.414 EAL: Detected CPU lcores: 10 00:04:41.414 EAL: Detected NUMA nodes: 1 00:04:41.414 EAL: Detected shared linkage of DPDK 00:04:41.414 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.414 EAL: Selected IOVA mode 'PA' 00:04:41.672 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:41.672 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:41.672 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:41.672 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:41.672 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:41.672 Starting DPDK initialization... 00:04:41.672 Starting SPDK post initialization... 00:04:41.672 SPDK NVMe probe 00:04:41.672 Attaching to 0000:00:10.0 00:04:41.672 Attaching to 0000:00:11.0 00:04:41.672 Attaching to 0000:00:12.0 00:04:41.672 Attaching to 0000:00:13.0 00:04:41.672 Attached to 0000:00:10.0 00:04:41.672 Attached to 0000:00:11.0 00:04:41.672 Attached to 0000:00:13.0 00:04:41.672 Attached to 0000:00:12.0 00:04:41.672 Cleaning up... 
00:04:41.672 ************************************ 00:04:41.672 END TEST env_dpdk_post_init 00:04:41.672 ************************************ 00:04:41.672 00:04:41.672 real 0m0.299s 00:04:41.672 user 0m0.098s 00:04:41.672 sys 0m0.104s 00:04:41.672 10:41:30 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.672 10:41:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.672 10:41:30 env -- env/env.sh@26 -- # uname 00:04:41.672 10:41:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:41.672 10:41:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.672 10:41:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.672 10:41:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.672 10:41:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.931 ************************************ 00:04:41.931 START TEST env_mem_callbacks 00:04:41.931 ************************************ 00:04:41.931 10:41:30 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.931 EAL: Detected CPU lcores: 10 00:04:41.931 EAL: Detected NUMA nodes: 1 00:04:41.931 EAL: Detected shared linkage of DPDK 00:04:41.931 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.931 EAL: Selected IOVA mode 'PA' 00:04:41.931 00:04:41.931 00:04:41.931 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.931 http://cunit.sourceforge.net/ 00:04:41.931 00:04:41.931 00:04:41.931 Suite: memory 00:04:41.931 Test: test ... 00:04:41.931 register 0x200000200000 2097152 00:04:41.931 malloc 3145728 00:04:41.931 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:41.931 register 0x200000400000 4194304 00:04:41.931 buf 0x2000004fffc0 len 3145728 PASSED 00:04:41.931 malloc 64 00:04:41.931 buf 0x2000004ffec0 len 64 PASSED 00:04:41.931 malloc 4194304 00:04:41.931 register 0x200000800000 6291456 00:04:41.931 buf 0x2000009fffc0 len 4194304 PASSED 00:04:41.931 free 0x2000004fffc0 3145728 00:04:41.931 free 0x2000004ffec0 64 00:04:41.931 unregister 0x200000400000 4194304 PASSED 00:04:41.931 free 0x2000009fffc0 4194304 00:04:41.931 unregister 0x200000800000 6291456 PASSED 00:04:41.931 malloc 8388608 00:04:41.931 register 0x200000400000 10485760 00:04:41.931 buf 0x2000005fffc0 len 8388608 PASSED 00:04:41.931 free 0x2000005fffc0 8388608 00:04:42.189 unregister 0x200000400000 10485760 PASSED 00:04:42.189 passed 00:04:42.189 00:04:42.189 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.189 suites 1 1 n/a 0 0 00:04:42.189 tests 1 1 1 0 0 00:04:42.189 asserts 15 15 15 0 n/a 00:04:42.189 00:04:42.189 Elapsed time = 0.081 seconds 00:04:42.189 00:04:42.189 real 0m0.293s 00:04:42.189 user 0m0.115s 00:04:42.189 sys 0m0.074s 00:04:42.189 10:41:31 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.189 10:41:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:42.189 ************************************ 00:04:42.189 END TEST env_mem_callbacks 00:04:42.189 ************************************ 00:04:42.189 00:04:42.189 real 0m9.908s 00:04:42.189 user 0m8.042s 00:04:42.189 sys 0m1.486s 00:04:42.189 10:41:31 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.189 10:41:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.189 ************************************ 00:04:42.189 END TEST env 00:04:42.189 
************************************ 00:04:42.189 10:41:31 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:42.189 10:41:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.189 10:41:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.189 10:41:31 -- common/autotest_common.sh@10 -- # set +x 00:04:42.189 ************************************ 00:04:42.189 START TEST rpc 00:04:42.189 ************************************ 00:04:42.189 10:41:31 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:42.448 * Looking for test storage... 00:04:42.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:42.448 10:41:31 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.448 10:41:31 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.448 10:41:31 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.448 10:41:31 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.448 10:41:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.448 10:41:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.448 10:41:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.448 10:41:31 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.448 10:41:31 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.448 10:41:31 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.448 10:41:31 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.448 10:41:31 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.448 10:41:31 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.448 10:41:31 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.448 10:41:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.448 10:41:31 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:42.448 10:41:31 rpc -- scripts/common.sh@345 -- # : 1 00:04:42.448 10:41:31 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.448 10:41:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.448 10:41:31 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:42.448 10:41:31 rpc -- scripts/common.sh@353 -- # local d=1 00:04:42.448 10:41:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.448 10:41:31 rpc -- scripts/common.sh@355 -- # echo 1 00:04:42.448 10:41:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.448 10:41:31 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:42.448 10:41:31 rpc -- scripts/common.sh@353 -- # local d=2 00:04:42.448 10:41:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.448 10:41:31 rpc -- scripts/common.sh@355 -- # echo 2 00:04:42.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:42.448 10:41:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.448 10:41:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.448 10:41:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.448 10:41:31 rpc -- scripts/common.sh@368 -- # return 0 00:04:42.448 10:41:31 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.448 10:41:31 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.448 --rc genhtml_branch_coverage=1 00:04:42.448 --rc genhtml_function_coverage=1 00:04:42.448 --rc genhtml_legend=1 00:04:42.448 --rc geninfo_all_blocks=1 00:04:42.448 --rc geninfo_unexecuted_blocks=1 00:04:42.448 00:04:42.448 ' 00:04:42.448 10:41:31 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.448 --rc genhtml_branch_coverage=1 00:04:42.448 --rc genhtml_function_coverage=1 00:04:42.448 --rc genhtml_legend=1 00:04:42.448 --rc geninfo_all_blocks=1 00:04:42.448 --rc geninfo_unexecuted_blocks=1 00:04:42.448 00:04:42.448 ' 00:04:42.448 10:41:31 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:42.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.448 --rc genhtml_branch_coverage=1 00:04:42.448 --rc genhtml_function_coverage=1 00:04:42.448 --rc genhtml_legend=1 00:04:42.448 --rc geninfo_all_blocks=1 00:04:42.448 --rc geninfo_unexecuted_blocks=1 00:04:42.448 00:04:42.448 ' 00:04:42.448 10:41:31 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.449 --rc genhtml_branch_coverage=1 00:04:42.449 --rc genhtml_function_coverage=1 00:04:42.449 --rc genhtml_legend=1 00:04:42.449 --rc geninfo_all_blocks=1 00:04:42.449 --rc geninfo_unexecuted_blocks=1 00:04:42.449 00:04:42.449 ' 00:04:42.449 10:41:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57674 00:04:42.449 10:41:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.449 10:41:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57674 00:04:42.449 10:41:31 rpc -- common/autotest_common.sh@835 -- # '[' -z 57674 ']' 00:04:42.449 10:41:31 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.449 10:41:31 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.449 10:41:31 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.449 10:41:31 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.449 10:41:31 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:42.449 10:41:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.708 [2024-11-20 10:41:31.722223] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:04:42.709 [2024-11-20 10:41:31.722589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57674 ] 00:04:42.709 [2024-11-20 10:41:31.912610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.968 [2024-11-20 10:41:32.023564] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:04:42.968 [2024-11-20 10:41:32.023859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57674' to capture a snapshot of events at runtime. 00:04:42.968 [2024-11-20 10:41:32.023962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:42.968 [2024-11-20 10:41:32.024017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:42.968 [2024-11-20 10:41:32.024047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57674 for offline analysis/debug. 00:04:42.968 [2024-11-20 10:41:32.025367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.906 10:41:32 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.906 10:41:32 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:43.906 10:41:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.906 10:41:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.906 10:41:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:43.906 10:41:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:43.906 10:41:32 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.906 10:41:32 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.906 10:41:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.906 ************************************ 00:04:43.906 START TEST rpc_integrity 00:04:43.906 ************************************ 00:04:43.906 10:41:32 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:43.906 10:41:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.906 10:41:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.906 10:41:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.906 10:41:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.906 10:41:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.906 10:41:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.906 10:41:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.906 10:41:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.906 10:41:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.906 10:41:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.906 10:41:32 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.906 10:41:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:43.906 10:41:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.906 10:41:32 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.906 10:41:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.906 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.906 10:41:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.906 { 00:04:43.906 "name": "Malloc0", 00:04:43.906 "aliases": [ 00:04:43.906 "8ebbb34f-9c02-4e69-a28a-925fe25fc732" 00:04:43.906 ], 
00:04:43.906 "product_name": "Malloc disk", 00:04:43.906 "block_size": 512, 00:04:43.906 "num_blocks": 16384, 00:04:43.906 "uuid": "8ebbb34f-9c02-4e69-a28a-925fe25fc732", 00:04:43.906 "assigned_rate_limits": { 00:04:43.906 "rw_ios_per_sec": 0, 00:04:43.906 "rw_mbytes_per_sec": 0, 00:04:43.906 "r_mbytes_per_sec": 0, 00:04:43.906 "w_mbytes_per_sec": 0 00:04:43.906 }, 00:04:43.906 "claimed": false, 00:04:43.906 "zoned": false, 00:04:43.906 "supported_io_types": { 00:04:43.906 "read": true, 00:04:43.906 "write": true, 00:04:43.906 "unmap": true, 00:04:43.906 "flush": true, 00:04:43.906 "reset": true, 00:04:43.906 "nvme_admin": false, 00:04:43.906 "nvme_io": false, 00:04:43.906 "nvme_io_md": false, 00:04:43.906 "write_zeroes": true, 00:04:43.906 "zcopy": true, 00:04:43.906 "get_zone_info": false, 00:04:43.906 "zone_management": false, 00:04:43.906 "zone_append": false, 00:04:43.906 "compare": false, 00:04:43.906 "compare_and_write": false, 00:04:43.906 "abort": true, 00:04:43.906 "seek_hole": false, 00:04:43.906 "seek_data": false, 00:04:43.906 "copy": true, 00:04:43.906 "nvme_iov_md": false 00:04:43.906 }, 00:04:43.906 "memory_domains": [ 00:04:43.906 { 00:04:43.906 "dma_device_id": "system", 00:04:43.906 "dma_device_type": 1 00:04:43.906 }, 00:04:43.906 { 00:04:43.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.906 "dma_device_type": 2 00:04:43.906 } 00:04:43.906 ], 00:04:43.906 "driver_specific": {} 00:04:43.906 } 00:04:43.906 ]' 00:04:43.906 10:41:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.906 10:41:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.906 10:41:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:43.906 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.906 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.906 [2024-11-20 10:41:33.053451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:43.906 [2024-11-20 10:41:33.053523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.906 [2024-11-20 10:41:33.053571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:43.906 [2024-11-20 10:41:33.053590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.907 [2024-11-20 10:41:33.056285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.907 [2024-11-20 10:41:33.056443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.907 Passthru0 00:04:43.907 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.907 10:41:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.907 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.907 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.907 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.907 10:41:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.907 { 00:04:43.907 "name": "Malloc0", 00:04:43.907 "aliases": [ 00:04:43.907 "8ebbb34f-9c02-4e69-a28a-925fe25fc732" 00:04:43.907 ], 00:04:43.907 "product_name": "Malloc disk", 00:04:43.907 "block_size": 512, 00:04:43.907 "num_blocks": 16384, 00:04:43.907 "uuid": "8ebbb34f-9c02-4e69-a28a-925fe25fc732", 00:04:43.907 "assigned_rate_limits": { 00:04:43.907 "rw_ios_per_sec": 0, 
00:04:43.907 "rw_mbytes_per_sec": 0, 00:04:43.907 "r_mbytes_per_sec": 0, 00:04:43.907 "w_mbytes_per_sec": 0 00:04:43.907 }, 00:04:43.907 "claimed": true, 00:04:43.907 "claim_type": "exclusive_write", 00:04:43.907 "zoned": false, 00:04:43.907 "supported_io_types": { 00:04:43.907 "read": true, 00:04:43.907 "write": true, 00:04:43.907 "unmap": true, 00:04:43.907 "flush": true, 00:04:43.907 "reset": true, 00:04:43.907 "nvme_admin": false, 00:04:43.907 "nvme_io": false, 00:04:43.907 "nvme_io_md": false, 00:04:43.907 "write_zeroes": true, 00:04:43.907 "zcopy": true, 00:04:43.907 "get_zone_info": false, 00:04:43.907 "zone_management": false, 00:04:43.907 "zone_append": false, 00:04:43.907 "compare": false, 00:04:43.907 "compare_and_write": false, 00:04:43.907 "abort": true, 00:04:43.907 "seek_hole": false, 00:04:43.907 "seek_data": false, 00:04:43.907 "copy": true, 00:04:43.907 "nvme_iov_md": false 00:04:43.907 }, 00:04:43.907 "memory_domains": [ 00:04:43.907 { 00:04:43.907 "dma_device_id": "system", 00:04:43.907 "dma_device_type": 1 00:04:43.907 }, 00:04:43.907 { 00:04:43.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.907 "dma_device_type": 2 00:04:43.907 } 00:04:43.907 ], 00:04:43.907 "driver_specific": {} 00:04:43.907 }, 00:04:43.907 { 00:04:43.907 "name": "Passthru0", 00:04:43.907 "aliases": [ 00:04:43.907 "b387e534-d536-5053-9b8e-89e1a65e3826" 00:04:43.907 ], 00:04:43.907 "product_name": "passthru", 00:04:43.907 "block_size": 512, 00:04:43.907 "num_blocks": 16384, 00:04:43.907 "uuid": "b387e534-d536-5053-9b8e-89e1a65e3826", 00:04:43.907 "assigned_rate_limits": { 00:04:43.907 "rw_ios_per_sec": 0, 00:04:43.907 "rw_mbytes_per_sec": 0, 00:04:43.907 "r_mbytes_per_sec": 0, 00:04:43.907 "w_mbytes_per_sec": 0 00:04:43.907 }, 00:04:43.907 "claimed": false, 00:04:43.907 "zoned": false, 00:04:43.907 "supported_io_types": { 00:04:43.907 "read": true, 00:04:43.907 "write": true, 00:04:43.907 "unmap": true, 00:04:43.907 "flush": true, 00:04:43.907 "reset": true, 00:04:43.907 "nvme_admin": false, 00:04:43.907 "nvme_io": false, 00:04:43.907 "nvme_io_md": false, 00:04:43.907 "write_zeroes": true, 00:04:43.907 "zcopy": true, 00:04:43.907 "get_zone_info": false, 00:04:43.907 "zone_management": false, 00:04:43.907 "zone_append": false, 00:04:43.907 "compare": false, 00:04:43.907 "compare_and_write": false, 00:04:43.907 "abort": true, 00:04:43.907 "seek_hole": false, 00:04:43.907 "seek_data": false, 00:04:43.907 "copy": true, 00:04:43.907 "nvme_iov_md": false 00:04:43.907 }, 00:04:43.907 "memory_domains": [ 00:04:43.907 { 00:04:43.907 "dma_device_id": "system", 00:04:43.907 "dma_device_type": 1 00:04:43.907 }, 00:04:43.907 { 00:04:43.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.907 "dma_device_type": 2 00:04:43.907 } 00:04:43.907 ], 00:04:43.907 "driver_specific": { 00:04:43.907 "passthru": { 00:04:43.907 "name": "Passthru0", 00:04:43.907 "base_bdev_name": "Malloc0" 00:04:43.907 } 00:04:43.907 } 00:04:43.907 } 00:04:43.907 ]' 00:04:43.907 10:41:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.907 10:41:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.907 10:41:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.907 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.907 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.167 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.167 10:41:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:04:44.167 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.167 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.167 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.167 10:41:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.167 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.167 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.167 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.167 10:41:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.167 10:41:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:44.167 ************************************ 00:04:44.167 10:41:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.167 00:04:44.167 real 0m0.352s 00:04:44.167 user 0m0.187s 00:04:44.167 sys 0m0.062s 00:04:44.167 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.167 10:41:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.167 END TEST rpc_integrity 00:04:44.167 ************************************ 00:04:44.167 10:41:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:44.167 10:41:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.167 10:41:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.167 10:41:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.167 ************************************ 00:04:44.167 START TEST rpc_plugins 00:04:44.167 ************************************ 00:04:44.167 10:41:33 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:44.167 10:41:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:44.167 10:41:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.167 10:41:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.167 10:41:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.167 10:41:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:44.167 10:41:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:44.167 10:41:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.167 10:41:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.167 10:41:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.167 10:41:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:44.167 { 00:04:44.167 "name": "Malloc1", 00:04:44.167 "aliases": [ 00:04:44.167 "1eb81db1-8fb8-4852-9d61-926f25018c74" 00:04:44.167 ], 00:04:44.167 "product_name": "Malloc disk", 00:04:44.167 "block_size": 4096, 00:04:44.167 "num_blocks": 256, 00:04:44.167 "uuid": "1eb81db1-8fb8-4852-9d61-926f25018c74", 00:04:44.167 "assigned_rate_limits": { 00:04:44.167 "rw_ios_per_sec": 0, 00:04:44.167 "rw_mbytes_per_sec": 0, 00:04:44.167 "r_mbytes_per_sec": 0, 00:04:44.167 "w_mbytes_per_sec": 0 00:04:44.167 }, 00:04:44.167 "claimed": false, 00:04:44.167 "zoned": false, 00:04:44.167 "supported_io_types": { 00:04:44.167 "read": true, 00:04:44.167 "write": true, 00:04:44.167 "unmap": true, 00:04:44.167 "flush": true, 00:04:44.167 "reset": true, 00:04:44.167 "nvme_admin": false, 00:04:44.167 "nvme_io": false, 00:04:44.167 "nvme_io_md": false, 00:04:44.167 "write_zeroes": true, 
00:04:44.167 "zcopy": true, 00:04:44.167 "get_zone_info": false, 00:04:44.167 "zone_management": false, 00:04:44.167 "zone_append": false, 00:04:44.167 "compare": false, 00:04:44.167 "compare_and_write": false, 00:04:44.167 "abort": true, 00:04:44.167 "seek_hole": false, 00:04:44.167 "seek_data": false, 00:04:44.167 "copy": true, 00:04:44.167 "nvme_iov_md": false 00:04:44.167 }, 00:04:44.167 "memory_domains": [ 00:04:44.167 { 00:04:44.167 "dma_device_id": "system", 00:04:44.167 "dma_device_type": 1 00:04:44.167 }, 00:04:44.167 { 00:04:44.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.167 "dma_device_type": 2 00:04:44.167 } 00:04:44.167 ], 00:04:44.167 "driver_specific": {} 00:04:44.167 } 00:04:44.167 ]' 00:04:44.167 10:41:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:44.167 10:41:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:44.167 10:41:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:44.167 10:41:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.167 10:41:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.167 10:41:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.167 10:41:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:44.167 10:41:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.167 10:41:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.427 10:41:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.427 10:41:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:44.427 10:41:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:44.427 ************************************ 00:04:44.427 END TEST rpc_plugins 00:04:44.427 ************************************ 00:04:44.427 10:41:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:44.427 00:04:44.427 real 0m0.162s 00:04:44.427 user 0m0.088s 00:04:44.427 sys 0m0.031s 00:04:44.427 10:41:33 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.427 10:41:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.427 10:41:33 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:44.427 10:41:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.427 10:41:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.427 10:41:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.427 ************************************ 00:04:44.427 START TEST rpc_trace_cmd_test 00:04:44.427 ************************************ 00:04:44.427 10:41:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:44.427 10:41:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:44.427 10:41:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:44.427 10:41:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.427 10:41:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:44.427 10:41:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.427 10:41:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:44.427 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57674", 00:04:44.427 "tpoint_group_mask": "0x8", 00:04:44.427 "iscsi_conn": { 00:04:44.427 "mask": "0x2", 00:04:44.427 "tpoint_mask": "0x0" 00:04:44.427 }, 00:04:44.427 "scsi": { 00:04:44.427 
"mask": "0x4", 00:04:44.427 "tpoint_mask": "0x0" 00:04:44.427 }, 00:04:44.427 "bdev": { 00:04:44.427 "mask": "0x8", 00:04:44.427 "tpoint_mask": "0xffffffffffffffff" 00:04:44.427 }, 00:04:44.427 "nvmf_rdma": { 00:04:44.427 "mask": "0x10", 00:04:44.427 "tpoint_mask": "0x0" 00:04:44.427 }, 00:04:44.427 "nvmf_tcp": { 00:04:44.427 "mask": "0x20", 00:04:44.427 "tpoint_mask": "0x0" 00:04:44.427 }, 00:04:44.427 "ftl": { 00:04:44.427 "mask": "0x40", 00:04:44.427 "tpoint_mask": "0x0" 00:04:44.427 }, 00:04:44.427 "blobfs": { 00:04:44.427 "mask": "0x80", 00:04:44.427 "tpoint_mask": "0x0" 00:04:44.427 }, 00:04:44.427 "dsa": { 00:04:44.427 "mask": "0x200", 00:04:44.427 "tpoint_mask": "0x0" 00:04:44.427 }, 00:04:44.427 "thread": { 00:04:44.427 "mask": "0x400", 00:04:44.427 "tpoint_mask": "0x0" 00:04:44.427 }, 00:04:44.427 "nvme_pcie": { 00:04:44.427 "mask": "0x800", 00:04:44.427 "tpoint_mask": "0x0" 00:04:44.427 }, 00:04:44.427 "iaa": { 00:04:44.427 "mask": "0x1000", 00:04:44.427 "tpoint_mask": "0x0" 00:04:44.427 }, 00:04:44.427 "nvme_tcp": { 00:04:44.427 "mask": "0x2000", 00:04:44.427 "tpoint_mask": "0x0" 00:04:44.427 }, 00:04:44.427 "bdev_nvme": { 00:04:44.427 "mask": "0x4000", 00:04:44.427 "tpoint_mask": "0x0" 00:04:44.427 }, 00:04:44.427 "sock": { 00:04:44.427 "mask": "0x8000", 00:04:44.427 "tpoint_mask": "0x0" 00:04:44.427 }, 00:04:44.427 "blob": { 00:04:44.427 "mask": "0x10000", 00:04:44.427 "tpoint_mask": "0x0" 00:04:44.427 }, 00:04:44.427 "bdev_raid": { 00:04:44.427 "mask": "0x20000", 00:04:44.427 "tpoint_mask": "0x0" 00:04:44.427 }, 00:04:44.427 "scheduler": { 00:04:44.427 "mask": "0x40000", 00:04:44.427 "tpoint_mask": "0x0" 00:04:44.427 } 00:04:44.427 }' 00:04:44.427 10:41:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:44.427 10:41:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:44.427 10:41:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:44.427 10:41:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:44.427 10:41:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:44.687 10:41:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:44.687 10:41:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:44.687 10:41:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:44.687 10:41:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:44.687 10:41:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:44.687 00:04:44.688 real 0m0.211s 00:04:44.688 user 0m0.160s 00:04:44.688 sys 0m0.041s 00:04:44.688 10:41:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.688 10:41:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:44.688 ************************************ 00:04:44.688 END TEST rpc_trace_cmd_test 00:04:44.688 ************************************ 00:04:44.688 10:41:33 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:44.688 10:41:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:44.688 10:41:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:44.688 10:41:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.688 10:41:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.688 10:41:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.688 ************************************ 00:04:44.688 START TEST rpc_daemon_integrity 00:04:44.688 
************************************ 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.688 { 00:04:44.688 "name": "Malloc2", 00:04:44.688 "aliases": [ 00:04:44.688 "89f98237-0d5f-4232-b11a-df15511f80cb" 00:04:44.688 ], 00:04:44.688 "product_name": "Malloc disk", 00:04:44.688 "block_size": 512, 00:04:44.688 "num_blocks": 16384, 00:04:44.688 "uuid": "89f98237-0d5f-4232-b11a-df15511f80cb", 00:04:44.688 "assigned_rate_limits": { 00:04:44.688 "rw_ios_per_sec": 0, 00:04:44.688 "rw_mbytes_per_sec": 0, 00:04:44.688 "r_mbytes_per_sec": 0, 00:04:44.688 "w_mbytes_per_sec": 0 00:04:44.688 }, 00:04:44.688 "claimed": false, 00:04:44.688 "zoned": false, 00:04:44.688 "supported_io_types": { 00:04:44.688 "read": true, 00:04:44.688 "write": true, 00:04:44.688 "unmap": true, 00:04:44.688 "flush": true, 00:04:44.688 "reset": true, 00:04:44.688 "nvme_admin": false, 00:04:44.688 "nvme_io": false, 00:04:44.688 "nvme_io_md": false, 00:04:44.688 "write_zeroes": true, 00:04:44.688 "zcopy": true, 00:04:44.688 "get_zone_info": false, 00:04:44.688 "zone_management": false, 00:04:44.688 "zone_append": false, 00:04:44.688 "compare": false, 00:04:44.688 "compare_and_write": false, 00:04:44.688 "abort": true, 00:04:44.688 "seek_hole": false, 00:04:44.688 "seek_data": false, 00:04:44.688 "copy": true, 00:04:44.688 "nvme_iov_md": false 00:04:44.688 }, 00:04:44.688 "memory_domains": [ 00:04:44.688 { 00:04:44.688 "dma_device_id": "system", 00:04:44.688 "dma_device_type": 1 00:04:44.688 }, 00:04:44.688 { 00:04:44.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.688 "dma_device_type": 2 00:04:44.688 } 00:04:44.688 ], 00:04:44.688 "driver_specific": {} 00:04:44.688 } 00:04:44.688 ]' 00:04:44.688 10:41:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:44.948 10:41:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.948 10:41:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:04:44.948 10:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.948 10:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.948 [2024-11-20 10:41:33.964819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:44.948 [2024-11-20 10:41:33.964986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.948 [2024-11-20 10:41:33.965015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:44.948 [2024-11-20 10:41:33.965030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.948 [2024-11-20 10:41:33.967624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.948 [2024-11-20 10:41:33.967667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.948 Passthru0 00:04:44.948 10:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.948 10:41:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.948 10:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.948 10:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.948 10:41:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.948 10:41:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.948 { 00:04:44.948 "name": "Malloc2", 00:04:44.948 "aliases": [ 00:04:44.948 "89f98237-0d5f-4232-b11a-df15511f80cb" 00:04:44.948 ], 00:04:44.948 "product_name": "Malloc disk", 00:04:44.948 "block_size": 512, 00:04:44.948 "num_blocks": 16384, 00:04:44.948 "uuid": "89f98237-0d5f-4232-b11a-df15511f80cb", 00:04:44.948 "assigned_rate_limits": { 00:04:44.948 "rw_ios_per_sec": 0, 00:04:44.948 "rw_mbytes_per_sec": 0, 00:04:44.948 "r_mbytes_per_sec": 0, 00:04:44.948 "w_mbytes_per_sec": 0 00:04:44.948 }, 00:04:44.948 "claimed": true, 00:04:44.948 "claim_type": "exclusive_write", 00:04:44.948 "zoned": false, 00:04:44.948 "supported_io_types": { 00:04:44.948 "read": true, 00:04:44.948 "write": true, 00:04:44.948 "unmap": true, 00:04:44.948 "flush": true, 00:04:44.948 "reset": true, 00:04:44.948 "nvme_admin": false, 00:04:44.948 "nvme_io": false, 00:04:44.948 "nvme_io_md": false, 00:04:44.948 "write_zeroes": true, 00:04:44.948 "zcopy": true, 00:04:44.948 "get_zone_info": false, 00:04:44.948 "zone_management": false, 00:04:44.948 "zone_append": false, 00:04:44.948 "compare": false, 00:04:44.948 "compare_and_write": false, 00:04:44.948 "abort": true, 00:04:44.948 "seek_hole": false, 00:04:44.948 "seek_data": false, 00:04:44.948 "copy": true, 00:04:44.948 "nvme_iov_md": false 00:04:44.948 }, 00:04:44.948 "memory_domains": [ 00:04:44.948 { 00:04:44.948 "dma_device_id": "system", 00:04:44.948 "dma_device_type": 1 00:04:44.948 }, 00:04:44.948 { 00:04:44.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.948 "dma_device_type": 2 00:04:44.948 } 00:04:44.948 ], 00:04:44.948 "driver_specific": {} 00:04:44.948 }, 00:04:44.948 { 00:04:44.948 "name": "Passthru0", 00:04:44.948 "aliases": [ 00:04:44.948 "9ad87dfc-906f-5177-8158-662d0f22a672" 00:04:44.948 ], 00:04:44.948 "product_name": "passthru", 00:04:44.948 "block_size": 512, 00:04:44.948 "num_blocks": 16384, 00:04:44.948 "uuid": "9ad87dfc-906f-5177-8158-662d0f22a672", 00:04:44.948 "assigned_rate_limits": { 00:04:44.948 
"rw_ios_per_sec": 0, 00:04:44.948 "rw_mbytes_per_sec": 0, 00:04:44.948 "r_mbytes_per_sec": 0, 00:04:44.948 "w_mbytes_per_sec": 0 00:04:44.948 }, 00:04:44.948 "claimed": false, 00:04:44.948 "zoned": false, 00:04:44.948 "supported_io_types": { 00:04:44.948 "read": true, 00:04:44.948 "write": true, 00:04:44.948 "unmap": true, 00:04:44.948 "flush": true, 00:04:44.948 "reset": true, 00:04:44.948 "nvme_admin": false, 00:04:44.948 "nvme_io": false, 00:04:44.948 "nvme_io_md": false, 00:04:44.948 "write_zeroes": true, 00:04:44.948 "zcopy": true, 00:04:44.948 "get_zone_info": false, 00:04:44.948 "zone_management": false, 00:04:44.948 "zone_append": false, 00:04:44.948 "compare": false, 00:04:44.948 "compare_and_write": false, 00:04:44.948 "abort": true, 00:04:44.948 "seek_hole": false, 00:04:44.948 "seek_data": false, 00:04:44.948 "copy": true, 00:04:44.948 "nvme_iov_md": false 00:04:44.948 }, 00:04:44.948 "memory_domains": [ 00:04:44.948 { 00:04:44.948 "dma_device_id": "system", 00:04:44.948 "dma_device_type": 1 00:04:44.948 }, 00:04:44.948 { 00:04:44.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.948 "dma_device_type": 2 00:04:44.948 } 00:04:44.948 ], 00:04:44.948 "driver_specific": { 00:04:44.948 "passthru": { 00:04:44.948 "name": "Passthru0", 00:04:44.948 "base_bdev_name": "Malloc2" 00:04:44.948 } 00:04:44.948 } 00:04:44.948 } 00:04:44.948 ]' 00:04:44.948 10:41:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:44.948 ************************************ 00:04:44.948 END TEST rpc_daemon_integrity 00:04:44.948 ************************************ 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.948 00:04:44.948 real 0m0.334s 00:04:44.948 user 0m0.184s 00:04:44.948 sys 0m0.048s 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.948 10:41:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.948 10:41:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:44.948 10:41:34 rpc -- rpc/rpc.sh@84 -- # killprocess 57674 00:04:44.948 10:41:34 rpc -- 
common/autotest_common.sh@954 -- # '[' -z 57674 ']' 00:04:44.948 10:41:34 rpc -- common/autotest_common.sh@958 -- # kill -0 57674 00:04:44.948 10:41:34 rpc -- common/autotest_common.sh@959 -- # uname 00:04:45.207 10:41:34 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.208 10:41:34 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57674 00:04:45.208 killing process with pid 57674 00:04:45.208 10:41:34 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.208 10:41:34 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.208 10:41:34 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57674' 00:04:45.208 10:41:34 rpc -- common/autotest_common.sh@973 -- # kill 57674 00:04:45.208 10:41:34 rpc -- common/autotest_common.sh@978 -- # wait 57674 00:04:47.743 00:04:47.743 real 0m5.235s 00:04:47.743 user 0m5.637s 00:04:47.743 sys 0m0.984s 00:04:47.743 ************************************ 00:04:47.743 END TEST rpc 00:04:47.743 ************************************ 00:04:47.743 10:41:36 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.743 10:41:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.743 10:41:36 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:47.743 10:41:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.743 10:41:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.743 10:41:36 -- common/autotest_common.sh@10 -- # set +x 00:04:47.743 ************************************ 00:04:47.743 START TEST skip_rpc 00:04:47.743 ************************************ 00:04:47.743 10:41:36 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:47.743 * Looking for test storage... 00:04:47.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:47.743 10:41:36 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.743 10:41:36 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.743 10:41:36 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.743 10:41:36 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.743 10:41:36 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:47.743 10:41:36 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.743 10:41:36 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.743 --rc genhtml_branch_coverage=1 00:04:47.743 --rc genhtml_function_coverage=1 00:04:47.743 --rc genhtml_legend=1 00:04:47.743 --rc geninfo_all_blocks=1 00:04:47.743 --rc geninfo_unexecuted_blocks=1 00:04:47.743 00:04:47.743 ' 00:04:47.743 10:41:36 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.743 --rc genhtml_branch_coverage=1 00:04:47.743 --rc genhtml_function_coverage=1 00:04:47.743 --rc genhtml_legend=1 00:04:47.743 --rc geninfo_all_blocks=1 00:04:47.743 --rc geninfo_unexecuted_blocks=1 00:04:47.743 00:04:47.743 ' 00:04:47.743 10:41:36 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.743 --rc genhtml_branch_coverage=1 00:04:47.743 --rc genhtml_function_coverage=1 00:04:47.743 --rc genhtml_legend=1 00:04:47.743 --rc geninfo_all_blocks=1 00:04:47.743 --rc geninfo_unexecuted_blocks=1 00:04:47.743 00:04:47.743 ' 00:04:47.743 10:41:36 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.743 --rc genhtml_branch_coverage=1 00:04:47.743 --rc genhtml_function_coverage=1 00:04:47.743 --rc genhtml_legend=1 00:04:47.743 --rc geninfo_all_blocks=1 00:04:47.743 --rc geninfo_unexecuted_blocks=1 00:04:47.743 00:04:47.743 ' 00:04:47.743 10:41:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.743 10:41:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:47.743 10:41:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:47.743 10:41:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.743 10:41:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.743 10:41:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.743 ************************************ 00:04:47.743 START TEST skip_rpc 00:04:47.743 ************************************ 00:04:47.743 10:41:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:47.743 10:41:36 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57903 00:04:47.743 10:41:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:47.743 10:41:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.743 10:41:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:48.002 [2024-11-20 10:41:37.036614] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:04:48.002 [2024-11-20 10:41:37.036917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57903 ] 00:04:48.002 [2024-11-20 10:41:37.224334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.261 [2024-11-20 10:41:37.336634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.534 10:41:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57903 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57903 ']' 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57903 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57903 00:04:53.535 killing process with pid 57903 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57903' 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@973 
-- # kill 57903 00:04:53.535 10:41:41 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57903 00:04:55.441 00:04:55.441 real 0m7.369s 00:04:55.441 user 0m6.876s 00:04:55.441 sys 0m0.412s 00:04:55.441 ************************************ 00:04:55.441 END TEST skip_rpc 00:04:55.441 ************************************ 00:04:55.441 10:41:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.441 10:41:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.441 10:41:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:55.441 10:41:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.441 10:41:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.441 10:41:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.441 ************************************ 00:04:55.441 START TEST skip_rpc_with_json 00:04:55.441 ************************************ 00:04:55.441 10:41:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:55.441 10:41:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:55.441 10:41:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58007 00:04:55.441 10:41:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.441 10:41:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.442 10:41:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58007 00:04:55.442 10:41:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58007 ']' 00:04:55.442 10:41:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.442 10:41:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.442 10:41:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.442 10:41:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.442 10:41:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.442 [2024-11-20 10:41:44.468895] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
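The skip_rpc run that just finished reduces to: launch spdk_tgt with --no-rpc-server, then require that an ordinary RPC fails. A minimal sketch of that shape, assuming rpc.py on the default socket stands in for the harness's rpc_cmd wrapper (illustrative only, not the exact rpc/skip_rpc.sh code):

    # Launch the target with the RPC server disabled, as rpc/skip_rpc.sh@15 does.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    trap 'kill $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    sleep 5    # rpc/skip_rpc.sh@19: fixed sleep, since there is no RPC socket to poll
    # With no RPC server up, spdk_get_version has to fail; an answer is a test failure.
    if rpc.py spdk_get_version &> /dev/null; then
        echo 'unexpected: RPC answered despite --no-rpc-server' >&2
        exit 1
    fi
    trap - SIGINT SIGTERM EXIT
    kill "$spdk_pid"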
00:04:55.442 [2024-11-20 10:41:44.469016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58007 ] 00:04:55.442 [2024-11-20 10:41:44.651483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.700 [2024-11-20 10:41:44.761810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.638 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.638 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:56.638 10:41:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:56.638 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.638 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.638 [2024-11-20 10:41:45.587624] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:56.638 request: 00:04:56.638 { 00:04:56.638 "trtype": "tcp", 00:04:56.638 "method": "nvmf_get_transports", 00:04:56.638 "req_id": 1 00:04:56.638 } 00:04:56.638 Got JSON-RPC error response 00:04:56.638 response: 00:04:56.638 { 00:04:56.638 "code": -19, 00:04:56.638 "message": "No such device" 00:04:56.638 } 00:04:56.638 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:56.638 10:41:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:56.638 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.638 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.638 [2024-11-20 10:41:45.599741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.638 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.638 10:41:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:56.638 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.638 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.638 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.638 10:41:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:56.638 { 00:04:56.638 "subsystems": [ 00:04:56.638 { 00:04:56.638 "subsystem": "fsdev", 00:04:56.638 "config": [ 00:04:56.638 { 00:04:56.638 "method": "fsdev_set_opts", 00:04:56.638 "params": { 00:04:56.638 "fsdev_io_pool_size": 65535, 00:04:56.638 "fsdev_io_cache_size": 256 00:04:56.638 } 00:04:56.638 } 00:04:56.638 ] 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "subsystem": "keyring", 00:04:56.638 "config": [] 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "subsystem": "iobuf", 00:04:56.638 "config": [ 00:04:56.638 { 00:04:56.638 "method": "iobuf_set_options", 00:04:56.638 "params": { 00:04:56.638 "small_pool_count": 8192, 00:04:56.638 "large_pool_count": 1024, 00:04:56.638 "small_bufsize": 8192, 00:04:56.638 "large_bufsize": 135168, 00:04:56.638 "enable_numa": false 00:04:56.638 } 00:04:56.638 } 00:04:56.638 ] 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "subsystem": "sock", 00:04:56.638 "config": [ 00:04:56.638 { 
00:04:56.638 "method": "sock_set_default_impl", 00:04:56.638 "params": { 00:04:56.638 "impl_name": "posix" 00:04:56.638 } 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "method": "sock_impl_set_options", 00:04:56.638 "params": { 00:04:56.638 "impl_name": "ssl", 00:04:56.638 "recv_buf_size": 4096, 00:04:56.638 "send_buf_size": 4096, 00:04:56.638 "enable_recv_pipe": true, 00:04:56.638 "enable_quickack": false, 00:04:56.638 "enable_placement_id": 0, 00:04:56.638 "enable_zerocopy_send_server": true, 00:04:56.638 "enable_zerocopy_send_client": false, 00:04:56.638 "zerocopy_threshold": 0, 00:04:56.638 "tls_version": 0, 00:04:56.638 "enable_ktls": false 00:04:56.638 } 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "method": "sock_impl_set_options", 00:04:56.638 "params": { 00:04:56.638 "impl_name": "posix", 00:04:56.638 "recv_buf_size": 2097152, 00:04:56.638 "send_buf_size": 2097152, 00:04:56.638 "enable_recv_pipe": true, 00:04:56.638 "enable_quickack": false, 00:04:56.638 "enable_placement_id": 0, 00:04:56.638 "enable_zerocopy_send_server": true, 00:04:56.638 "enable_zerocopy_send_client": false, 00:04:56.638 "zerocopy_threshold": 0, 00:04:56.638 "tls_version": 0, 00:04:56.638 "enable_ktls": false 00:04:56.638 } 00:04:56.638 } 00:04:56.638 ] 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "subsystem": "vmd", 00:04:56.638 "config": [] 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "subsystem": "accel", 00:04:56.638 "config": [ 00:04:56.638 { 00:04:56.638 "method": "accel_set_options", 00:04:56.638 "params": { 00:04:56.638 "small_cache_size": 128, 00:04:56.638 "large_cache_size": 16, 00:04:56.638 "task_count": 2048, 00:04:56.638 "sequence_count": 2048, 00:04:56.638 "buf_count": 2048 00:04:56.638 } 00:04:56.638 } 00:04:56.638 ] 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "subsystem": "bdev", 00:04:56.638 "config": [ 00:04:56.638 { 00:04:56.638 "method": "bdev_set_options", 00:04:56.638 "params": { 00:04:56.638 "bdev_io_pool_size": 65535, 00:04:56.638 "bdev_io_cache_size": 256, 00:04:56.638 "bdev_auto_examine": true, 00:04:56.638 "iobuf_small_cache_size": 128, 00:04:56.638 "iobuf_large_cache_size": 16 00:04:56.638 } 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "method": "bdev_raid_set_options", 00:04:56.638 "params": { 00:04:56.638 "process_window_size_kb": 1024, 00:04:56.638 "process_max_bandwidth_mb_sec": 0 00:04:56.638 } 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "method": "bdev_iscsi_set_options", 00:04:56.638 "params": { 00:04:56.638 "timeout_sec": 30 00:04:56.638 } 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "method": "bdev_nvme_set_options", 00:04:56.638 "params": { 00:04:56.638 "action_on_timeout": "none", 00:04:56.638 "timeout_us": 0, 00:04:56.638 "timeout_admin_us": 0, 00:04:56.638 "keep_alive_timeout_ms": 10000, 00:04:56.638 "arbitration_burst": 0, 00:04:56.638 "low_priority_weight": 0, 00:04:56.638 "medium_priority_weight": 0, 00:04:56.638 "high_priority_weight": 0, 00:04:56.638 "nvme_adminq_poll_period_us": 10000, 00:04:56.638 "nvme_ioq_poll_period_us": 0, 00:04:56.638 "io_queue_requests": 0, 00:04:56.638 "delay_cmd_submit": true, 00:04:56.638 "transport_retry_count": 4, 00:04:56.638 "bdev_retry_count": 3, 00:04:56.638 "transport_ack_timeout": 0, 00:04:56.638 "ctrlr_loss_timeout_sec": 0, 00:04:56.638 "reconnect_delay_sec": 0, 00:04:56.638 "fast_io_fail_timeout_sec": 0, 00:04:56.638 "disable_auto_failback": false, 00:04:56.638 "generate_uuids": false, 00:04:56.638 "transport_tos": 0, 00:04:56.638 "nvme_error_stat": false, 00:04:56.638 "rdma_srq_size": 0, 00:04:56.638 "io_path_stat": false, 
00:04:56.638 "allow_accel_sequence": false, 00:04:56.638 "rdma_max_cq_size": 0, 00:04:56.638 "rdma_cm_event_timeout_ms": 0, 00:04:56.638 "dhchap_digests": [ 00:04:56.638 "sha256", 00:04:56.638 "sha384", 00:04:56.638 "sha512" 00:04:56.638 ], 00:04:56.638 "dhchap_dhgroups": [ 00:04:56.638 "null", 00:04:56.638 "ffdhe2048", 00:04:56.638 "ffdhe3072", 00:04:56.638 "ffdhe4096", 00:04:56.638 "ffdhe6144", 00:04:56.638 "ffdhe8192" 00:04:56.638 ] 00:04:56.638 } 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "method": "bdev_nvme_set_hotplug", 00:04:56.638 "params": { 00:04:56.638 "period_us": 100000, 00:04:56.638 "enable": false 00:04:56.638 } 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "method": "bdev_wait_for_examine" 00:04:56.638 } 00:04:56.638 ] 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "subsystem": "scsi", 00:04:56.638 "config": null 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "subsystem": "scheduler", 00:04:56.638 "config": [ 00:04:56.638 { 00:04:56.638 "method": "framework_set_scheduler", 00:04:56.638 "params": { 00:04:56.638 "name": "static" 00:04:56.638 } 00:04:56.638 } 00:04:56.638 ] 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "subsystem": "vhost_scsi", 00:04:56.638 "config": [] 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "subsystem": "vhost_blk", 00:04:56.638 "config": [] 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "subsystem": "ublk", 00:04:56.638 "config": [] 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "subsystem": "nbd", 00:04:56.638 "config": [] 00:04:56.638 }, 00:04:56.638 { 00:04:56.638 "subsystem": "nvmf", 00:04:56.638 "config": [ 00:04:56.638 { 00:04:56.638 "method": "nvmf_set_config", 00:04:56.638 "params": { 00:04:56.638 "discovery_filter": "match_any", 00:04:56.638 "admin_cmd_passthru": { 00:04:56.638 "identify_ctrlr": false 00:04:56.638 }, 00:04:56.638 "dhchap_digests": [ 00:04:56.638 "sha256", 00:04:56.638 "sha384", 00:04:56.638 "sha512" 00:04:56.638 ], 00:04:56.638 "dhchap_dhgroups": [ 00:04:56.638 "null", 00:04:56.638 "ffdhe2048", 00:04:56.638 "ffdhe3072", 00:04:56.638 "ffdhe4096", 00:04:56.639 "ffdhe6144", 00:04:56.639 "ffdhe8192" 00:04:56.639 ] 00:04:56.639 } 00:04:56.639 }, 00:04:56.639 { 00:04:56.639 "method": "nvmf_set_max_subsystems", 00:04:56.639 "params": { 00:04:56.639 "max_subsystems": 1024 00:04:56.639 } 00:04:56.639 }, 00:04:56.639 { 00:04:56.639 "method": "nvmf_set_crdt", 00:04:56.639 "params": { 00:04:56.639 "crdt1": 0, 00:04:56.639 "crdt2": 0, 00:04:56.639 "crdt3": 0 00:04:56.639 } 00:04:56.639 }, 00:04:56.639 { 00:04:56.639 "method": "nvmf_create_transport", 00:04:56.639 "params": { 00:04:56.639 "trtype": "TCP", 00:04:56.639 "max_queue_depth": 128, 00:04:56.639 "max_io_qpairs_per_ctrlr": 127, 00:04:56.639 "in_capsule_data_size": 4096, 00:04:56.639 "max_io_size": 131072, 00:04:56.639 "io_unit_size": 131072, 00:04:56.639 "max_aq_depth": 128, 00:04:56.639 "num_shared_buffers": 511, 00:04:56.639 "buf_cache_size": 4294967295, 00:04:56.639 "dif_insert_or_strip": false, 00:04:56.639 "zcopy": false, 00:04:56.639 "c2h_success": true, 00:04:56.639 "sock_priority": 0, 00:04:56.639 "abort_timeout_sec": 1, 00:04:56.639 "ack_timeout": 0, 00:04:56.639 "data_wr_pool_size": 0 00:04:56.639 } 00:04:56.639 } 00:04:56.639 ] 00:04:56.639 }, 00:04:56.639 { 00:04:56.639 "subsystem": "iscsi", 00:04:56.639 "config": [ 00:04:56.639 { 00:04:56.639 "method": "iscsi_set_options", 00:04:56.639 "params": { 00:04:56.639 "node_base": "iqn.2016-06.io.spdk", 00:04:56.639 "max_sessions": 128, 00:04:56.639 "max_connections_per_session": 2, 00:04:56.639 "max_queue_depth": 64, 00:04:56.639 
"default_time2wait": 2, 00:04:56.639 "default_time2retain": 20, 00:04:56.639 "first_burst_length": 8192, 00:04:56.639 "immediate_data": true, 00:04:56.639 "allow_duplicated_isid": false, 00:04:56.639 "error_recovery_level": 0, 00:04:56.639 "nop_timeout": 60, 00:04:56.639 "nop_in_interval": 30, 00:04:56.639 "disable_chap": false, 00:04:56.639 "require_chap": false, 00:04:56.639 "mutual_chap": false, 00:04:56.639 "chap_group": 0, 00:04:56.639 "max_large_datain_per_connection": 64, 00:04:56.639 "max_r2t_per_connection": 4, 00:04:56.639 "pdu_pool_size": 36864, 00:04:56.639 "immediate_data_pool_size": 16384, 00:04:56.639 "data_out_pool_size": 2048 00:04:56.639 } 00:04:56.639 } 00:04:56.639 ] 00:04:56.639 } 00:04:56.639 ] 00:04:56.639 } 00:04:56.639 10:41:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:56.639 10:41:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58007 00:04:56.639 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58007 ']' 00:04:56.639 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58007 00:04:56.639 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:56.639 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.639 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58007 00:04:56.639 killing process with pid 58007 00:04:56.639 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.639 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.639 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58007' 00:04:56.639 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58007 00:04:56.639 10:41:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58007 00:04:59.170 10:41:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58063 00:04:59.170 10:41:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:59.170 10:41:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:04.446 10:41:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58063 00:05:04.446 10:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58063 ']' 00:05:04.446 10:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58063 00:05:04.446 10:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:04.446 10:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.446 10:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58063 00:05:04.446 killing process with pid 58063 00:05:04.446 10:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.446 10:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.446 10:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58063' 00:05:04.446 10:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58063 00:05:04.446 10:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58063 00:05:06.362 10:41:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:06.362 10:41:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:06.362 ************************************ 00:05:06.362 END TEST skip_rpc_with_json 00:05:06.362 ************************************ 00:05:06.362 00:05:06.362 real 0m11.202s 00:05:06.362 user 0m10.624s 00:05:06.362 sys 0m0.877s 00:05:06.362 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.362 10:41:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.621 10:41:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:06.621 10:41:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.621 10:41:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.621 10:41:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.621 ************************************ 00:05:06.621 START TEST skip_rpc_with_delay 00:05:06.621 ************************************ 00:05:06.621 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:06.621 10:41:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.621 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:06.621 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.621 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.621 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.621 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.621 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.621 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.621 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.621 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:06.621 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:06.621 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.621 [2024-11-20 10:41:55.756110] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
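The error just logged is exactly what skip_rpc_with_delay wants: spdk_tgt must refuse '--wait-for-rpc' when '--no-rpc-server' guarantees no RPC server will ever start, and the NOT/valid_exec_arg wrappers above turn that refusal into a pass. A minimal sketch of the inversion helper, assuming a simplified body rather than the real common/autotest_common.sh implementation:

    # NOT <cmd...>: run the command and pass only if it exits non-zero,
    # the way the test above expects spdk_tgt to reject its own flags.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))    # arithmetic test: exit 0 exactly when "$@" failed
    }

    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc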
00:05:06.622 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:06.622 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:06.622 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:06.622 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:06.622 00:05:06.622 real 0m0.194s 00:05:06.622 user 0m0.100s 00:05:06.622 sys 0m0.092s 00:05:06.622 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.622 ************************************ 00:05:06.622 END TEST skip_rpc_with_delay 00:05:06.622 ************************************ 00:05:06.622 10:41:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:06.881 10:41:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:06.881 10:41:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:06.881 10:41:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:06.881 10:41:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.881 10:41:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.881 10:41:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.881 ************************************ 00:05:06.881 START TEST exit_on_failed_rpc_init 00:05:06.881 ************************************ 00:05:06.881 10:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:06.881 10:41:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58191 00:05:06.881 10:41:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.881 10:41:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58191 00:05:06.881 10:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58191 ']' 00:05:06.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.881 10:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.881 10:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.881 10:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.881 10:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.881 10:41:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.881 [2024-11-20 10:41:56.015455] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
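The waitforlisten 58191 call above simply polls until the freshly started target answers on /var/tmp/spdk.sock or dies trying. A hypothetical simplification of that loop (the real helper lives in common/autotest_common.sh; the rpc.py probe and the retry budget here are assumptions):

    # Poll until the target either answers RPCs on its socket or exits.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1      # process died before listening
            if rpc.py -s "$rpc_addr" spdk_get_version &> /dev/null; then
                return 0                                 # listening and answering
            fi
            sleep 0.1
        done
        return 1                                         # gave up waiting
    }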
00:05:06.881 [2024-11-20 10:41:56.015590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58191 ] 00:05:07.140 [2024-11-20 10:41:56.200901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.140 [2024-11-20 10:41:56.313080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.080 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.080 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:08.080 10:41:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.080 10:41:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:08.080 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:08.080 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:08.080 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.080 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.080 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.080 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.080 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.080 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.080 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.080 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:08.080 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:08.080 [2024-11-20 10:41:57.277020] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:05:08.080 [2024-11-20 10:41:57.277312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58209 ] 00:05:08.340 [2024-11-20 10:41:57.459177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.599 [2024-11-20 10:41:57.612380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.599 [2024-11-20 10:41:57.612784] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:08.599 [2024-11-20 10:41:57.612813] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:08.599 [2024-11-20 10:41:57.612838] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58191 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58191 ']' 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58191 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58191 00:05:08.858 killing process with pid 58191 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58191' 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58191 00:05:08.858 10:41:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58191 00:05:11.395 ************************************ 00:05:11.395 END TEST exit_on_failed_rpc_init 00:05:11.395 ************************************ 00:05:11.395 00:05:11.395 real 0m4.359s 00:05:11.395 user 0m4.660s 00:05:11.395 sys 0m0.667s 00:05:11.395 10:42:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.395 10:42:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:11.395 10:42:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:11.395 00:05:11.395 real 0m23.653s 00:05:11.395 user 0m22.464s 00:05:11.395 sys 0m2.384s 00:05:11.395 10:42:00 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.395 10:42:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.395 ************************************ 00:05:11.395 END TEST skip_rpc 00:05:11.395 ************************************ 00:05:11.395 10:42:00 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:11.395 10:42:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.395 10:42:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.395 10:42:00 -- common/autotest_common.sh@10 -- # set +x 00:05:11.395 
************************************ 00:05:11.395 START TEST rpc_client 00:05:11.395 ************************************ 00:05:11.395 10:42:00 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:11.395 * Looking for test storage... 00:05:11.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:11.395 10:42:00 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:11.395 10:42:00 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:11.395 10:42:00 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:11.395 10:42:00 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.395 10:42:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:11.395 10:42:00 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.395 10:42:00 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:11.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.395 --rc genhtml_branch_coverage=1 00:05:11.395 --rc genhtml_function_coverage=1 00:05:11.395 --rc genhtml_legend=1 00:05:11.395 --rc geninfo_all_blocks=1 00:05:11.395 --rc geninfo_unexecuted_blocks=1 00:05:11.395 00:05:11.395 ' 00:05:11.395 10:42:00 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:11.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.395 --rc genhtml_branch_coverage=1 00:05:11.395 --rc genhtml_function_coverage=1 00:05:11.395 --rc genhtml_legend=1 00:05:11.395 --rc geninfo_all_blocks=1 00:05:11.395 --rc geninfo_unexecuted_blocks=1 00:05:11.395 00:05:11.395 ' 00:05:11.395 10:42:00 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:11.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.395 --rc genhtml_branch_coverage=1 00:05:11.395 --rc genhtml_function_coverage=1 00:05:11.395 --rc genhtml_legend=1 00:05:11.395 --rc geninfo_all_blocks=1 00:05:11.395 --rc geninfo_unexecuted_blocks=1 00:05:11.395 00:05:11.395 ' 00:05:11.395 10:42:00 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:11.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.395 --rc genhtml_branch_coverage=1 00:05:11.395 --rc genhtml_function_coverage=1 00:05:11.395 --rc genhtml_legend=1 00:05:11.395 --rc geninfo_all_blocks=1 00:05:11.395 --rc geninfo_unexecuted_blocks=1 00:05:11.395 00:05:11.395 ' 00:05:11.396 10:42:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:11.664 OK 00:05:11.664 10:42:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:11.664 00:05:11.664 real 0m0.311s 00:05:11.664 user 0m0.175s 00:05:11.664 sys 0m0.151s 00:05:11.664 10:42:00 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.664 10:42:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:11.664 ************************************ 00:05:11.664 END TEST rpc_client 00:05:11.664 ************************************ 00:05:11.664 10:42:00 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:11.664 10:42:00 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.664 10:42:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.664 10:42:00 -- common/autotest_common.sh@10 -- # set +x 00:05:11.664 ************************************ 00:05:11.664 START TEST json_config 00:05:11.664 ************************************ 00:05:11.664 10:42:00 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:11.664 10:42:00 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:11.664 10:42:00 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:11.664 10:42:00 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:11.945 10:42:00 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:11.945 10:42:00 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.945 10:42:00 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.945 10:42:00 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.945 10:42:00 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.945 10:42:00 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.945 10:42:00 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.945 10:42:00 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.945 10:42:00 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.945 10:42:00 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.945 10:42:00 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.945 10:42:00 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.945 10:42:00 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:11.945 10:42:00 json_config -- scripts/common.sh@345 -- # : 1 00:05:11.945 10:42:00 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.945 10:42:00 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.945 10:42:00 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:11.945 10:42:00 json_config -- scripts/common.sh@353 -- # local d=1 00:05:11.945 10:42:00 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.945 10:42:00 json_config -- scripts/common.sh@355 -- # echo 1 00:05:11.945 10:42:00 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.945 10:42:00 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:11.945 10:42:00 json_config -- scripts/common.sh@353 -- # local d=2 00:05:11.945 10:42:00 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.945 10:42:00 json_config -- scripts/common.sh@355 -- # echo 2 00:05:11.945 10:42:00 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.945 10:42:00 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.945 10:42:00 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.945 10:42:00 json_config -- scripts/common.sh@368 -- # return 0 00:05:11.945 10:42:00 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.945 10:42:00 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:11.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.945 --rc genhtml_branch_coverage=1 00:05:11.945 --rc genhtml_function_coverage=1 00:05:11.945 --rc genhtml_legend=1 00:05:11.945 --rc geninfo_all_blocks=1 00:05:11.945 --rc geninfo_unexecuted_blocks=1 00:05:11.945 00:05:11.945 ' 00:05:11.945 10:42:00 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:11.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.945 --rc genhtml_branch_coverage=1 00:05:11.945 --rc genhtml_function_coverage=1 00:05:11.945 --rc genhtml_legend=1 00:05:11.945 --rc geninfo_all_blocks=1 00:05:11.945 --rc geninfo_unexecuted_blocks=1 00:05:11.945 00:05:11.945 ' 00:05:11.945 10:42:00 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:11.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.945 --rc genhtml_branch_coverage=1 00:05:11.945 --rc genhtml_function_coverage=1 00:05:11.945 --rc genhtml_legend=1 00:05:11.945 --rc geninfo_all_blocks=1 00:05:11.945 --rc geninfo_unexecuted_blocks=1 00:05:11.945 00:05:11.945 ' 00:05:11.945 10:42:00 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:11.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.945 --rc genhtml_branch_coverage=1 00:05:11.945 --rc genhtml_function_coverage=1 00:05:11.945 --rc genhtml_legend=1 00:05:11.945 --rc geninfo_all_blocks=1 00:05:11.945 --rc geninfo_unexecuted_blocks=1 00:05:11.945 00:05:11.945 ' 00:05:11.945 10:42:00 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.945 10:42:00 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:024c4b49-b590-476c-8262-62dc32414747 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=024c4b49-b590-476c-8262-62dc32414747 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:11.945 10:42:00 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:11.945 10:42:00 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.945 10:42:00 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.945 10:42:00 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.945 10:42:00 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.945 10:42:00 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.945 10:42:00 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.945 10:42:00 json_config -- paths/export.sh@5 -- # export PATH 00:05:11.945 10:42:00 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@51 -- # : 0 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:11.945 10:42:00 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.945 10:42:00 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:11.945 10:42:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:11.945 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:11.945 10:42:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:11.945 10:42:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:11.945 10:42:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:11.945 10:42:01 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:11.945 WARNING: No tests are enabled so not running JSON configuration tests 00:05:11.945 10:42:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:11.945 10:42:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:11.945 10:42:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:11.945 10:42:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:11.946 10:42:01 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:11.946 10:42:01 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:11.946 ************************************ 00:05:11.946 END TEST json_config 00:05:11.946 ************************************ 00:05:11.946 00:05:11.946 real 0m0.235s 00:05:11.946 user 0m0.134s 00:05:11.946 sys 0m0.101s 00:05:11.946 10:42:01 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.946 10:42:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.946 10:42:01 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:11.946 10:42:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.946 10:42:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.946 10:42:01 -- common/autotest_common.sh@10 -- # set +x 00:05:11.946 ************************************ 00:05:11.946 START TEST json_config_extra_key 00:05:11.946 ************************************ 00:05:11.946 10:42:01 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:11.946 10:42:01 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:11.946 10:42:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:11.946 10:42:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.205 10:42:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.205 10:42:01 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.205 10:42:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:12.205 10:42:01 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.205 10:42:01 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.205 --rc genhtml_branch_coverage=1 00:05:12.205 --rc genhtml_function_coverage=1 00:05:12.205 --rc genhtml_legend=1 00:05:12.205 --rc geninfo_all_blocks=1 00:05:12.205 --rc geninfo_unexecuted_blocks=1 00:05:12.205 00:05:12.205 ' 00:05:12.205 10:42:01 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.205 --rc genhtml_branch_coverage=1 00:05:12.205 --rc genhtml_function_coverage=1 00:05:12.205 --rc genhtml_legend=1 00:05:12.205 --rc geninfo_all_blocks=1 00:05:12.205 --rc geninfo_unexecuted_blocks=1 00:05:12.205 00:05:12.205 ' 00:05:12.205 10:42:01 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.205 --rc genhtml_branch_coverage=1 00:05:12.205 --rc genhtml_function_coverage=1 00:05:12.205 --rc genhtml_legend=1 00:05:12.205 --rc geninfo_all_blocks=1 00:05:12.205 --rc geninfo_unexecuted_blocks=1 00:05:12.205 00:05:12.205 ' 00:05:12.205 10:42:01 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.205 --rc genhtml_branch_coverage=1 00:05:12.205 --rc 
genhtml_function_coverage=1 00:05:12.205 --rc genhtml_legend=1 00:05:12.205 --rc geninfo_all_blocks=1 00:05:12.205 --rc geninfo_unexecuted_blocks=1 00:05:12.205 00:05:12.205 ' 00:05:12.206 10:42:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:024c4b49-b590-476c-8262-62dc32414747 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=024c4b49-b590-476c-8262-62dc32414747 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:12.206 10:42:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:12.206 10:42:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.206 10:42:01 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.206 10:42:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.206 10:42:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.206 10:42:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.206 10:42:01 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.206 10:42:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:12.206 10:42:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:12.206 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:12.206 10:42:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:12.206 10:42:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:12.206 10:42:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:12.206 10:42:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:12.206 10:42:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:12.206 10:42:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:12.206 10:42:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:12.206 10:42:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:12.206 10:42:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:12.206 10:42:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:12.206 10:42:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:12.206 10:42:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:12.206 INFO: launching applications... 
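Note on the "[: : integer expression expected" messages above: the xtrace shows nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']', i.e. a numeric test fed an empty string because the variable being compared is unset in this environment. A minimal sketch of the defensive pattern (the variable name SOME_TEST_FLAG is illustrative, not from the sourced script):

    #!/usr/bin/env bash
    # Hypothetical flag a caller may or may not export.
    some_flag=${SOME_TEST_FLAG:-}

    # Naive form, as traced above: '[' '' -eq 1 ']'
    # -> "[: : integer expression expected" when the value is empty.
    # if [ "$some_flag" -eq 1 ]; then ...

    # Defensive form: default the value to 0 before the numeric test.
    if [ "${some_flag:-0}" -eq 1 ]; then
        echo "flag enabled"
    else
        echo "flag disabled or unset"
    fi

The ":-0" expansion treats both unset and empty values as 0, so the test never sees a non-integer operand.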
00:05:12.206 10:42:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:12.206 10:42:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:12.206 10:42:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:12.206 10:42:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:12.206 10:42:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:12.206 10:42:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:12.206 10:42:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.206 Waiting for target to run... 00:05:12.206 10:42:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.206 10:42:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58419 00:05:12.206 10:42:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:12.206 10:42:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58419 /var/tmp/spdk_tgt.sock 00:05:12.206 10:42:01 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58419 ']' 00:05:12.206 10:42:01 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:12.206 10:42:01 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.206 10:42:01 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:12.206 10:42:01 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:12.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:12.206 10:42:01 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.206 10:42:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:12.206 [2024-11-20 10:42:01.414275] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:05:12.206 [2024-11-20 10:42:01.414614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58419 ] 00:05:12.774 [2024-11-20 10:42:01.811798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.774 [2024-11-20 10:42:01.915421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.711 10:42:02 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.711 00:05:13.711 INFO: shutting down applications... 00:05:13.711 10:42:02 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:13.711 10:42:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:13.711 10:42:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
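Here json_config_test_start_app launches spdk_tgt with --json and then waitforlisten blocks (max_retries=100) until pid 58419 is serving RPCs on /var/tmp/spdk_tgt.sock. A minimal sketch of the same idea — not the autotest helper itself; the probe (a plain socket-file test) and the 0.1s poll interval are assumptions:

    # Wait until a process is alive and its UNIX-domain socket exists.
    # $1 = pid to watch, $2 = socket path, $3 = retries (100 mirrors the trace).
    wait_for_socket() {
        local pid=$1 sock=$2 retries=${3:-100} i
        for ((i = 0; i < retries; i++)); do
            # Bail out early if the target died during startup.
            kill -0 "$pid" 2>/dev/null || return 1
            # -S tests for a socket file; the real helper goes further
            # and checks that the RPC server actually answers.
            [[ -S $sock ]] && return 0
            sleep 0.1
        done
        return 1
    }

    # wait_for_socket "$app_pid" /var/tmp/spdk_tgt.sock && echo "target up"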
00:05:13.711 10:42:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:13.711 10:42:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:13.711 10:42:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:13.711 10:42:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58419 ]] 00:05:13.711 10:42:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58419 00:05:13.711 10:42:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:13.711 10:42:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.711 10:42:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58419 00:05:13.711 10:42:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.970 10:42:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.970 10:42:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.970 10:42:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58419 00:05:13.970 10:42:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.539 10:42:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.539 10:42:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.539 10:42:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58419 00:05:14.539 10:42:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.107 10:42:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.107 10:42:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.107 10:42:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58419 00:05:15.107 10:42:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.675 10:42:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.675 10:42:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.675 10:42:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58419 00:05:15.675 10:42:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.934 10:42:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.934 10:42:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.934 10:42:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58419 00:05:15.934 10:42:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.503 10:42:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.503 10:42:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.503 10:42:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58419 00:05:16.503 SPDK target shutdown done 00:05:16.503 Success 00:05:16.503 10:42:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:16.503 10:42:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:16.503 10:42:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:16.503 10:42:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:16.503 10:42:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:16.503 00:05:16.503 real 0m4.584s 00:05:16.503 user 0m3.993s 00:05:16.503 sys 0m0.630s 00:05:16.503 
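The shutdown sequence traced above sends SIGINT to pid 58419 and then polls kill -0 in half-second steps, up to 30 iterations, until the pid disappears. The same pattern as a standalone sketch:

    # Ask a process to exit cleanly, polling until it is really gone.
    # Mirrors the kill -SIGINT / kill -0 / sleep 0.5 loop in the trace.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
        for ((i = 0; i < 30; i++)); do
            # kill -0 delivers no signal; it only checks the pid exists.
            kill -0 "$pid" 2>/dev/null || return 0
            sleep 0.5
        done
        return 1   # still alive after ~15s; caller may escalate to SIGKILL
    }

Five sleep iterations pass before pid 58419 exits here, which is why the break and "SPDK target shutdown done" only appear after several kill -0 probes.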
************************************ 00:05:16.503 END TEST json_config_extra_key 00:05:16.503 ************************************ 00:05:16.503 10:42:05 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.503 10:42:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:16.503 10:42:05 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.503 10:42:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.503 10:42:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.503 10:42:05 -- common/autotest_common.sh@10 -- # set +x 00:05:16.503 ************************************ 00:05:16.503 START TEST alias_rpc 00:05:16.503 ************************************ 00:05:16.503 10:42:05 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.797 * Looking for test storage... 00:05:16.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:16.797 10:42:05 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:16.797 10:42:05 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:16.797 10:42:05 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:16.797 10:42:05 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.797 10:42:05 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:16.797 10:42:05 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.797 10:42:05 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:16.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.797 --rc genhtml_branch_coverage=1 00:05:16.797 --rc genhtml_function_coverage=1 00:05:16.797 --rc genhtml_legend=1 00:05:16.797 --rc geninfo_all_blocks=1 00:05:16.797 --rc geninfo_unexecuted_blocks=1 00:05:16.797 00:05:16.797 ' 00:05:16.797 10:42:05 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:16.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.797 --rc genhtml_branch_coverage=1 00:05:16.797 --rc genhtml_function_coverage=1 00:05:16.797 --rc genhtml_legend=1 00:05:16.797 --rc geninfo_all_blocks=1 00:05:16.797 --rc geninfo_unexecuted_blocks=1 00:05:16.797 00:05:16.797 ' 00:05:16.797 10:42:05 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:16.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.797 --rc genhtml_branch_coverage=1 00:05:16.797 --rc genhtml_function_coverage=1 00:05:16.797 --rc genhtml_legend=1 00:05:16.797 --rc geninfo_all_blocks=1 00:05:16.797 --rc geninfo_unexecuted_blocks=1 00:05:16.797 00:05:16.797 ' 00:05:16.797 10:42:05 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:16.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.797 --rc genhtml_branch_coverage=1 00:05:16.797 --rc genhtml_function_coverage=1 00:05:16.797 --rc genhtml_legend=1 00:05:16.797 --rc geninfo_all_blocks=1 00:05:16.797 --rc geninfo_unexecuted_blocks=1 00:05:16.797 00:05:16.798 ' 00:05:16.798 10:42:05 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:16.798 10:42:05 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58536 00:05:16.798 10:42:05 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.798 10:42:05 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58536 00:05:16.798 10:42:05 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58536 ']' 00:05:16.798 10:42:05 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.798 10:42:05 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.798 10:42:05 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:16.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.798 10:42:05 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.798 10:42:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.057 [2024-11-20 10:42:06.100189] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:05:17.057 [2024-11-20 10:42:06.100539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58536 ] 00:05:17.057 [2024-11-20 10:42:06.288642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.317 [2024-11-20 10:42:06.401473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.256 10:42:07 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.256 10:42:07 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.256 10:42:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:18.515 10:42:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58536 00:05:18.515 10:42:07 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58536 ']' 00:05:18.515 10:42:07 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58536 00:05:18.515 10:42:07 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:18.515 10:42:07 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.515 10:42:07 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58536 00:05:18.515 killing process with pid 58536 00:05:18.516 10:42:07 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.516 10:42:07 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.516 10:42:07 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58536' 00:05:18.516 10:42:07 alias_rpc -- common/autotest_common.sh@973 -- # kill 58536 00:05:18.516 10:42:07 alias_rpc -- common/autotest_common.sh@978 -- # wait 58536 00:05:21.051 ************************************ 00:05:21.051 END TEST alias_rpc 00:05:21.051 ************************************ 00:05:21.051 00:05:21.051 real 0m4.221s 00:05:21.051 user 0m4.143s 00:05:21.051 sys 0m0.644s 00:05:21.051 10:42:09 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.051 10:42:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.051 10:42:10 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:21.051 10:42:10 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:21.051 10:42:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.051 10:42:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.051 10:42:10 -- common/autotest_common.sh@10 -- # set +x 00:05:21.051 ************************************ 00:05:21.051 START TEST spdkcli_tcp 00:05:21.051 ************************************ 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:21.051 * Looking for test storage... 
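killprocess, traced above for pid 58536, reads the process name with ps --no-headers -o comm= and checks it against "sudo" before signalling and reaping the target, so a privileged wrapper is never killed in place of the real reactor process. A condensed sketch of that guard (the real helper instead walks to sudo's child; this version simply refuses):

    # Kill a pid, but never signal a 'sudo' wrapper by mistake.
    killprocess_sketch() {
        local pid=$1 name
        name=$(ps --no-headers -o comm= "$pid") || return 0   # already gone
        if [ "$name" = "sudo" ]; then
            echo "refusing to kill sudo (pid $pid)" >&2
            return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null   # reap it if it is our own child
    }

In the trace the name resolves to reactor_0, the sudo check fails, and the plain kill/wait pair follows.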
00:05:21.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.051 10:42:10 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.051 --rc genhtml_branch_coverage=1 00:05:21.051 --rc genhtml_function_coverage=1 00:05:21.051 --rc genhtml_legend=1 00:05:21.051 --rc geninfo_all_blocks=1 00:05:21.051 --rc geninfo_unexecuted_blocks=1 00:05:21.051 00:05:21.051 ' 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.051 --rc genhtml_branch_coverage=1 00:05:21.051 --rc genhtml_function_coverage=1 00:05:21.051 --rc genhtml_legend=1 00:05:21.051 --rc geninfo_all_blocks=1 00:05:21.051 --rc geninfo_unexecuted_blocks=1 00:05:21.051 
00:05:21.051 ' 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.051 --rc genhtml_branch_coverage=1 00:05:21.051 --rc genhtml_function_coverage=1 00:05:21.051 --rc genhtml_legend=1 00:05:21.051 --rc geninfo_all_blocks=1 00:05:21.051 --rc geninfo_unexecuted_blocks=1 00:05:21.051 00:05:21.051 ' 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.051 --rc genhtml_branch_coverage=1 00:05:21.051 --rc genhtml_function_coverage=1 00:05:21.051 --rc genhtml_legend=1 00:05:21.051 --rc geninfo_all_blocks=1 00:05:21.051 --rc geninfo_unexecuted_blocks=1 00:05:21.051 00:05:21.051 ' 00:05:21.051 10:42:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:21.051 10:42:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:21.051 10:42:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:21.051 10:42:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:21.051 10:42:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:21.051 10:42:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:21.051 10:42:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.051 10:42:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58643 00:05:21.051 10:42:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:21.051 10:42:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58643 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58643 ']' 00:05:21.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.051 10:42:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:21.310 [2024-11-20 10:42:10.373466] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
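spdk_tgt is started here with -m 0x3, so two reactors come up; the records that follow show the test bridging the target's UNIX-domain RPC socket to TCP with socat and driving it through rpc.py against 127.0.0.1:9998. The bridging pattern in isolation, with the port, socket path, and rpc.py flags taken from the trace below:

    # Bridge TCP port 9998 to the SPDK RPC UNIX socket, then issue an
    # RPC over TCP. rpc.py's retries absorb the race while socat starts.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r 100: up to 100 connection retries; -t 2: 2s timeout per attempt.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"

The rpc_get_methods reply that follows is the full method list of this build, which is why xnvme, FTL, and NVMe-oF RPCs all appear in it.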
00:05:21.310 [2024-11-20 10:42:10.373602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58643 ] 00:05:21.310 [2024-11-20 10:42:10.555651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.567 [2024-11-20 10:42:10.667059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.567 [2024-11-20 10:42:10.667073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.501 10:42:11 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.501 10:42:11 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:22.501 10:42:11 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58660 00:05:22.501 10:42:11 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:22.501 10:42:11 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:22.501 [ 00:05:22.501 "bdev_malloc_delete", 00:05:22.501 "bdev_malloc_create", 00:05:22.501 "bdev_null_resize", 00:05:22.501 "bdev_null_delete", 00:05:22.501 "bdev_null_create", 00:05:22.501 "bdev_nvme_cuse_unregister", 00:05:22.501 "bdev_nvme_cuse_register", 00:05:22.501 "bdev_opal_new_user", 00:05:22.501 "bdev_opal_set_lock_state", 00:05:22.501 "bdev_opal_delete", 00:05:22.501 "bdev_opal_get_info", 00:05:22.501 "bdev_opal_create", 00:05:22.501 "bdev_nvme_opal_revert", 00:05:22.501 "bdev_nvme_opal_init", 00:05:22.501 "bdev_nvme_send_cmd", 00:05:22.501 "bdev_nvme_set_keys", 00:05:22.501 "bdev_nvme_get_path_iostat", 00:05:22.501 "bdev_nvme_get_mdns_discovery_info", 00:05:22.501 "bdev_nvme_stop_mdns_discovery", 00:05:22.501 "bdev_nvme_start_mdns_discovery", 00:05:22.501 "bdev_nvme_set_multipath_policy", 00:05:22.501 "bdev_nvme_set_preferred_path", 00:05:22.501 "bdev_nvme_get_io_paths", 00:05:22.501 "bdev_nvme_remove_error_injection", 00:05:22.501 "bdev_nvme_add_error_injection", 00:05:22.501 "bdev_nvme_get_discovery_info", 00:05:22.501 "bdev_nvme_stop_discovery", 00:05:22.501 "bdev_nvme_start_discovery", 00:05:22.501 "bdev_nvme_get_controller_health_info", 00:05:22.501 "bdev_nvme_disable_controller", 00:05:22.501 "bdev_nvme_enable_controller", 00:05:22.501 "bdev_nvme_reset_controller", 00:05:22.501 "bdev_nvme_get_transport_statistics", 00:05:22.501 "bdev_nvme_apply_firmware", 00:05:22.501 "bdev_nvme_detach_controller", 00:05:22.501 "bdev_nvme_get_controllers", 00:05:22.501 "bdev_nvme_attach_controller", 00:05:22.501 "bdev_nvme_set_hotplug", 00:05:22.501 "bdev_nvme_set_options", 00:05:22.501 "bdev_passthru_delete", 00:05:22.501 "bdev_passthru_create", 00:05:22.501 "bdev_lvol_set_parent_bdev", 00:05:22.501 "bdev_lvol_set_parent", 00:05:22.501 "bdev_lvol_check_shallow_copy", 00:05:22.501 "bdev_lvol_start_shallow_copy", 00:05:22.501 "bdev_lvol_grow_lvstore", 00:05:22.501 "bdev_lvol_get_lvols", 00:05:22.501 "bdev_lvol_get_lvstores", 00:05:22.501 "bdev_lvol_delete", 00:05:22.501 "bdev_lvol_set_read_only", 00:05:22.501 "bdev_lvol_resize", 00:05:22.501 "bdev_lvol_decouple_parent", 00:05:22.501 "bdev_lvol_inflate", 00:05:22.501 "bdev_lvol_rename", 00:05:22.501 "bdev_lvol_clone_bdev", 00:05:22.501 "bdev_lvol_clone", 00:05:22.501 "bdev_lvol_snapshot", 00:05:22.501 "bdev_lvol_create", 00:05:22.501 "bdev_lvol_delete_lvstore", 00:05:22.501 "bdev_lvol_rename_lvstore", 00:05:22.501 
"bdev_lvol_create_lvstore", 00:05:22.501 "bdev_raid_set_options", 00:05:22.501 "bdev_raid_remove_base_bdev", 00:05:22.501 "bdev_raid_add_base_bdev", 00:05:22.501 "bdev_raid_delete", 00:05:22.501 "bdev_raid_create", 00:05:22.501 "bdev_raid_get_bdevs", 00:05:22.501 "bdev_error_inject_error", 00:05:22.501 "bdev_error_delete", 00:05:22.501 "bdev_error_create", 00:05:22.501 "bdev_split_delete", 00:05:22.501 "bdev_split_create", 00:05:22.501 "bdev_delay_delete", 00:05:22.501 "bdev_delay_create", 00:05:22.501 "bdev_delay_update_latency", 00:05:22.502 "bdev_zone_block_delete", 00:05:22.502 "bdev_zone_block_create", 00:05:22.502 "blobfs_create", 00:05:22.502 "blobfs_detect", 00:05:22.502 "blobfs_set_cache_size", 00:05:22.502 "bdev_xnvme_delete", 00:05:22.502 "bdev_xnvme_create", 00:05:22.502 "bdev_aio_delete", 00:05:22.502 "bdev_aio_rescan", 00:05:22.502 "bdev_aio_create", 00:05:22.502 "bdev_ftl_set_property", 00:05:22.502 "bdev_ftl_get_properties", 00:05:22.502 "bdev_ftl_get_stats", 00:05:22.502 "bdev_ftl_unmap", 00:05:22.502 "bdev_ftl_unload", 00:05:22.502 "bdev_ftl_delete", 00:05:22.502 "bdev_ftl_load", 00:05:22.502 "bdev_ftl_create", 00:05:22.502 "bdev_virtio_attach_controller", 00:05:22.502 "bdev_virtio_scsi_get_devices", 00:05:22.502 "bdev_virtio_detach_controller", 00:05:22.502 "bdev_virtio_blk_set_hotplug", 00:05:22.502 "bdev_iscsi_delete", 00:05:22.502 "bdev_iscsi_create", 00:05:22.502 "bdev_iscsi_set_options", 00:05:22.502 "accel_error_inject_error", 00:05:22.502 "ioat_scan_accel_module", 00:05:22.502 "dsa_scan_accel_module", 00:05:22.502 "iaa_scan_accel_module", 00:05:22.502 "keyring_file_remove_key", 00:05:22.502 "keyring_file_add_key", 00:05:22.502 "keyring_linux_set_options", 00:05:22.502 "fsdev_aio_delete", 00:05:22.502 "fsdev_aio_create", 00:05:22.502 "iscsi_get_histogram", 00:05:22.502 "iscsi_enable_histogram", 00:05:22.502 "iscsi_set_options", 00:05:22.502 "iscsi_get_auth_groups", 00:05:22.502 "iscsi_auth_group_remove_secret", 00:05:22.502 "iscsi_auth_group_add_secret", 00:05:22.502 "iscsi_delete_auth_group", 00:05:22.502 "iscsi_create_auth_group", 00:05:22.502 "iscsi_set_discovery_auth", 00:05:22.502 "iscsi_get_options", 00:05:22.502 "iscsi_target_node_request_logout", 00:05:22.502 "iscsi_target_node_set_redirect", 00:05:22.502 "iscsi_target_node_set_auth", 00:05:22.502 "iscsi_target_node_add_lun", 00:05:22.502 "iscsi_get_stats", 00:05:22.502 "iscsi_get_connections", 00:05:22.502 "iscsi_portal_group_set_auth", 00:05:22.502 "iscsi_start_portal_group", 00:05:22.502 "iscsi_delete_portal_group", 00:05:22.502 "iscsi_create_portal_group", 00:05:22.502 "iscsi_get_portal_groups", 00:05:22.502 "iscsi_delete_target_node", 00:05:22.502 "iscsi_target_node_remove_pg_ig_maps", 00:05:22.502 "iscsi_target_node_add_pg_ig_maps", 00:05:22.502 "iscsi_create_target_node", 00:05:22.502 "iscsi_get_target_nodes", 00:05:22.502 "iscsi_delete_initiator_group", 00:05:22.502 "iscsi_initiator_group_remove_initiators", 00:05:22.502 "iscsi_initiator_group_add_initiators", 00:05:22.502 "iscsi_create_initiator_group", 00:05:22.502 "iscsi_get_initiator_groups", 00:05:22.502 "nvmf_set_crdt", 00:05:22.502 "nvmf_set_config", 00:05:22.502 "nvmf_set_max_subsystems", 00:05:22.502 "nvmf_stop_mdns_prr", 00:05:22.502 "nvmf_publish_mdns_prr", 00:05:22.502 "nvmf_subsystem_get_listeners", 00:05:22.502 "nvmf_subsystem_get_qpairs", 00:05:22.502 "nvmf_subsystem_get_controllers", 00:05:22.502 "nvmf_get_stats", 00:05:22.502 "nvmf_get_transports", 00:05:22.502 "nvmf_create_transport", 00:05:22.502 "nvmf_get_targets", 00:05:22.502 
"nvmf_delete_target", 00:05:22.502 "nvmf_create_target", 00:05:22.502 "nvmf_subsystem_allow_any_host", 00:05:22.502 "nvmf_subsystem_set_keys", 00:05:22.502 "nvmf_subsystem_remove_host", 00:05:22.502 "nvmf_subsystem_add_host", 00:05:22.502 "nvmf_ns_remove_host", 00:05:22.502 "nvmf_ns_add_host", 00:05:22.502 "nvmf_subsystem_remove_ns", 00:05:22.502 "nvmf_subsystem_set_ns_ana_group", 00:05:22.502 "nvmf_subsystem_add_ns", 00:05:22.502 "nvmf_subsystem_listener_set_ana_state", 00:05:22.502 "nvmf_discovery_get_referrals", 00:05:22.502 "nvmf_discovery_remove_referral", 00:05:22.502 "nvmf_discovery_add_referral", 00:05:22.502 "nvmf_subsystem_remove_listener", 00:05:22.502 "nvmf_subsystem_add_listener", 00:05:22.502 "nvmf_delete_subsystem", 00:05:22.502 "nvmf_create_subsystem", 00:05:22.502 "nvmf_get_subsystems", 00:05:22.502 "env_dpdk_get_mem_stats", 00:05:22.502 "nbd_get_disks", 00:05:22.502 "nbd_stop_disk", 00:05:22.502 "nbd_start_disk", 00:05:22.502 "ublk_recover_disk", 00:05:22.502 "ublk_get_disks", 00:05:22.502 "ublk_stop_disk", 00:05:22.502 "ublk_start_disk", 00:05:22.502 "ublk_destroy_target", 00:05:22.502 "ublk_create_target", 00:05:22.502 "virtio_blk_create_transport", 00:05:22.502 "virtio_blk_get_transports", 00:05:22.502 "vhost_controller_set_coalescing", 00:05:22.502 "vhost_get_controllers", 00:05:22.502 "vhost_delete_controller", 00:05:22.502 "vhost_create_blk_controller", 00:05:22.502 "vhost_scsi_controller_remove_target", 00:05:22.502 "vhost_scsi_controller_add_target", 00:05:22.502 "vhost_start_scsi_controller", 00:05:22.502 "vhost_create_scsi_controller", 00:05:22.502 "thread_set_cpumask", 00:05:22.502 "scheduler_set_options", 00:05:22.502 "framework_get_governor", 00:05:22.502 "framework_get_scheduler", 00:05:22.502 "framework_set_scheduler", 00:05:22.502 "framework_get_reactors", 00:05:22.502 "thread_get_io_channels", 00:05:22.502 "thread_get_pollers", 00:05:22.502 "thread_get_stats", 00:05:22.502 "framework_monitor_context_switch", 00:05:22.502 "spdk_kill_instance", 00:05:22.502 "log_enable_timestamps", 00:05:22.502 "log_get_flags", 00:05:22.502 "log_clear_flag", 00:05:22.502 "log_set_flag", 00:05:22.502 "log_get_level", 00:05:22.502 "log_set_level", 00:05:22.502 "log_get_print_level", 00:05:22.502 "log_set_print_level", 00:05:22.502 "framework_enable_cpumask_locks", 00:05:22.502 "framework_disable_cpumask_locks", 00:05:22.502 "framework_wait_init", 00:05:22.502 "framework_start_init", 00:05:22.502 "scsi_get_devices", 00:05:22.502 "bdev_get_histogram", 00:05:22.502 "bdev_enable_histogram", 00:05:22.502 "bdev_set_qos_limit", 00:05:22.502 "bdev_set_qd_sampling_period", 00:05:22.502 "bdev_get_bdevs", 00:05:22.502 "bdev_reset_iostat", 00:05:22.502 "bdev_get_iostat", 00:05:22.502 "bdev_examine", 00:05:22.502 "bdev_wait_for_examine", 00:05:22.502 "bdev_set_options", 00:05:22.502 "accel_get_stats", 00:05:22.502 "accel_set_options", 00:05:22.502 "accel_set_driver", 00:05:22.502 "accel_crypto_key_destroy", 00:05:22.502 "accel_crypto_keys_get", 00:05:22.502 "accel_crypto_key_create", 00:05:22.502 "accel_assign_opc", 00:05:22.502 "accel_get_module_info", 00:05:22.502 "accel_get_opc_assignments", 00:05:22.502 "vmd_rescan", 00:05:22.502 "vmd_remove_device", 00:05:22.502 "vmd_enable", 00:05:22.502 "sock_get_default_impl", 00:05:22.502 "sock_set_default_impl", 00:05:22.502 "sock_impl_set_options", 00:05:22.502 "sock_impl_get_options", 00:05:22.502 "iobuf_get_stats", 00:05:22.502 "iobuf_set_options", 00:05:22.502 "keyring_get_keys", 00:05:22.502 "framework_get_pci_devices", 00:05:22.502 
"framework_get_config", 00:05:22.502 "framework_get_subsystems", 00:05:22.502 "fsdev_set_opts", 00:05:22.502 "fsdev_get_opts", 00:05:22.502 "trace_get_info", 00:05:22.502 "trace_get_tpoint_group_mask", 00:05:22.502 "trace_disable_tpoint_group", 00:05:22.502 "trace_enable_tpoint_group", 00:05:22.502 "trace_clear_tpoint_mask", 00:05:22.502 "trace_set_tpoint_mask", 00:05:22.502 "notify_get_notifications", 00:05:22.502 "notify_get_types", 00:05:22.502 "spdk_get_version", 00:05:22.502 "rpc_get_methods" 00:05:22.502 ] 00:05:22.502 10:42:11 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:22.502 10:42:11 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:22.502 10:42:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.769 10:42:11 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:22.769 10:42:11 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58643 00:05:22.769 10:42:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58643 ']' 00:05:22.769 10:42:11 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58643 00:05:22.769 10:42:11 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:22.769 10:42:11 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.769 10:42:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58643 00:05:22.769 10:42:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.769 killing process with pid 58643 00:05:22.769 10:42:11 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.769 10:42:11 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58643' 00:05:22.769 10:42:11 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58643 00:05:22.769 10:42:11 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58643 00:05:25.307 00:05:25.307 real 0m4.178s 00:05:25.307 user 0m7.377s 00:05:25.307 sys 0m0.686s 00:05:25.307 10:42:14 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.307 10:42:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.307 ************************************ 00:05:25.307 END TEST spdkcli_tcp 00:05:25.307 ************************************ 00:05:25.307 10:42:14 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:25.307 10:42:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.307 10:42:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.307 10:42:14 -- common/autotest_common.sh@10 -- # set +x 00:05:25.307 ************************************ 00:05:25.307 START TEST dpdk_mem_utility 00:05:25.307 ************************************ 00:05:25.307 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:25.307 * Looking for test storage... 
00:05:25.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:25.307 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:25.307 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:25.307 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:25.307 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.307 10:42:14 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:25.307 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.307 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:25.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.307 --rc genhtml_branch_coverage=1 00:05:25.307 --rc genhtml_function_coverage=1 00:05:25.307 --rc genhtml_legend=1 00:05:25.307 --rc geninfo_all_blocks=1 00:05:25.307 --rc geninfo_unexecuted_blocks=1 00:05:25.307 00:05:25.307 ' 00:05:25.307 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:25.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.307 --rc 
genhtml_branch_coverage=1 00:05:25.307 --rc genhtml_function_coverage=1 00:05:25.307 --rc genhtml_legend=1 00:05:25.307 --rc geninfo_all_blocks=1 00:05:25.307 --rc geninfo_unexecuted_blocks=1 00:05:25.307 00:05:25.307 ' 00:05:25.307 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:25.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.307 --rc genhtml_branch_coverage=1 00:05:25.307 --rc genhtml_function_coverage=1 00:05:25.307 --rc genhtml_legend=1 00:05:25.307 --rc geninfo_all_blocks=1 00:05:25.307 --rc geninfo_unexecuted_blocks=1 00:05:25.307 00:05:25.307 ' 00:05:25.307 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:25.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.307 --rc genhtml_branch_coverage=1 00:05:25.307 --rc genhtml_function_coverage=1 00:05:25.307 --rc genhtml_legend=1 00:05:25.307 --rc geninfo_all_blocks=1 00:05:25.307 --rc geninfo_unexecuted_blocks=1 00:05:25.307 00:05:25.307 ' 00:05:25.307 10:42:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:25.307 10:42:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58765 00:05:25.307 10:42:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.307 10:42:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58765 00:05:25.307 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58765 ']' 00:05:25.307 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.307 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.307 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.307 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.307 10:42:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.567 [2024-11-20 10:42:14.617620] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:05:25.567 [2024-11-20 10:42:14.617763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58765 ] 00:05:25.567 [2024-11-20 10:42:14.800149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.826 [2024-11-20 10:42:14.915294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.764 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.764 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:26.764 10:42:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:26.764 10:42:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:26.764 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.764 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.764 { 00:05:26.764 "filename": "/tmp/spdk_mem_dump.txt" 00:05:26.764 } 00:05:26.764 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.764 10:42:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:26.764 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:26.764 1 heaps totaling size 816.000000 MiB 00:05:26.764 size: 816.000000 MiB heap id: 0 00:05:26.764 end heaps---------- 00:05:26.764 9 mempools totaling size 595.772034 MiB 00:05:26.764 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:26.764 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:26.764 size: 92.545471 MiB name: bdev_io_58765 00:05:26.764 size: 50.003479 MiB name: msgpool_58765 00:05:26.764 size: 36.509338 MiB name: fsdev_io_58765 00:05:26.764 size: 21.763794 MiB name: PDU_Pool 00:05:26.764 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:26.764 size: 4.133484 MiB name: evtpool_58765 00:05:26.764 size: 0.026123 MiB name: Session_Pool 00:05:26.764 end mempools------- 00:05:26.764 6 memzones totaling size 4.142822 MiB 00:05:26.764 size: 1.000366 MiB name: RG_ring_0_58765 00:05:26.764 size: 1.000366 MiB name: RG_ring_1_58765 00:05:26.764 size: 1.000366 MiB name: RG_ring_4_58765 00:05:26.764 size: 1.000366 MiB name: RG_ring_5_58765 00:05:26.764 size: 0.125366 MiB name: RG_ring_2_58765 00:05:26.764 size: 0.015991 MiB name: RG_ring_3_58765 00:05:26.764 end memzones------- 00:05:26.764 10:42:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:26.764 heap id: 0 total size: 816.000000 MiB number of busy elements: 315 number of free elements: 18 00:05:26.764 list of free elements. 
size: 16.791382 MiB
element at address: 0x200006400000 with size: 1.995972 MiB
element at address: 0x20000a600000 with size: 1.995972 MiB
element at address: 0x200003e00000 with size: 1.991028 MiB
element at address: 0x200018d00040 with size: 0.999939 MiB
element at address: 0x200019100040 with size: 0.999939 MiB
element at address: 0x200019200000 with size: 0.999084 MiB
element at address: 0x200031e00000 with size: 0.994324 MiB
element at address: 0x200000400000 with size: 0.992004 MiB
element at address: 0x200018a00000 with size: 0.959656 MiB
element at address: 0x200019500040 with size: 0.936401 MiB
element at address: 0x200000200000 with size: 0.716980 MiB
element at address: 0x20001ac00000 with size: 0.561951 MiB
element at address: 0x200000c00000 with size: 0.490173 MiB
element at address: 0x200018e00000 with size: 0.487976 MiB
element at address: 0x200019600000 with size: 0.485413 MiB
element at address: 0x200012c00000 with size: 0.443237 MiB
element at address: 0x200028000000 with size: 0.390442 MiB
element at address: 0x200000800000 with size: 0.350891 MiB
list of standard malloc elements. size: 199.287720 MiB
element at address: 0x20000a7fef80 with size: 132.000183 MiB
element at address: 0x2000065fef80 with size: 64.000183 MiB
element at address: 0x200018bfff80 with size: 1.000183 MiB
element at address: 0x200018ffff80 with size: 1.000183 MiB
element at address: 0x2000193fff80 with size: 1.000183 MiB
element at address: 0x2000003d9e80 with size: 0.140808 MiB
element at address: 0x2000195eff40 with size: 0.062683 MiB
element at address: 0x2000003fdf40 with size: 0.007996 MiB
element at address: 0x20000a5ff040 with size: 0.000427 MiB
element at address: 0x2000195efdc0 with size: 0.000366 MiB
element at address: 0x200012bff040 with size: 0.000305 MiB
elements with size: 0.000244 MiB each, at the following addresses (within a range, addresses step by 0x100):
  0x2000002d7b00, 0x2000003d9d80
  0x2000004fdf40 through 0x2000004ff940
  0x2000004ffbc0, 0x2000004ffcc0, 0x2000004ffdc0
  0x20000087e1c0 through 0x20000087f4c0
  0x2000008ff800, 0x2000008ffa80
  0x200000c7d7c0 through 0x200000c7ebc0
  0x200000cfef00, 0x200000cff000
  0x20000a5ff200 through 0x20000a5fff00
  0x200012bff180 through 0x200012bffc80, 0x200012bfff00
  0x200012c71780 through 0x200012c72180
  0x200012cf24c0, 0x200018afdd00
  0x200018e7cec0 through 0x200018e7d9c0
  0x200018efdd00, 0x2000192ffc40, 0x2000195efbc0, 0x2000195efcc0, 0x2000196bc680
  0x20001ac8fdc0 through 0x20001ac953c0
  0x200028063f40, 0x200028064040
  0x20002806ad00, 0x20002806af80 through 0x20002806fe80
list of memzone associated elements.
size: 599.920898 MiB 00:05:26.766 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:26.766 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:26.766 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:26.766 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:26.766 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:26.766 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58765_0 00:05:26.766 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:26.766 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58765_0 00:05:26.766 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:26.766 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58765_0 00:05:26.766 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:26.766 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:26.767 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:26.767 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:26.767 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:26.767 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58765_0 00:05:26.767 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:26.767 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58765 00:05:26.767 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:26.767 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58765 00:05:26.767 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:26.767 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:26.767 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:26.767 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:26.767 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:26.767 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:26.767 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:26.767 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:26.767 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:26.767 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58765 00:05:26.767 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:26.767 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58765 00:05:26.767 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:26.767 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58765 00:05:26.767 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:26.767 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58765 00:05:26.767 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:26.767 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58765 00:05:26.767 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:26.767 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58765 00:05:26.767 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:26.767 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:26.767 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:26.767 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:26.767 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:26.767 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:26.767 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:26.767 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58765 00:05:26.767 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:26.767 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58765 00:05:26.767 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:26.767 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:26.767 element at address: 0x200028064140 with size: 0.023804 MiB 00:05:26.767 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:26.767 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:26.767 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58765 00:05:26.767 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:05:26.767 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:26.767 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:26.767 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58765 00:05:26.767 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:26.767 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58765 00:05:26.767 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:26.767 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58765 00:05:26.767 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:05:26.767 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:26.767 10:42:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:26.767 10:42:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58765 00:05:26.767 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58765 ']' 00:05:26.767 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58765 00:05:26.767 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:26.767 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.767 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58765 00:05:26.767 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.767 killing process with pid 58765 00:05:26.767 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.767 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58765' 00:05:26.767 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58765 00:05:26.767 10:42:15 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58765 00:05:29.302 00:05:29.302 real 0m4.024s 00:05:29.302 user 0m3.906s 00:05:29.302 sys 0m0.579s 00:05:29.302 ************************************ 00:05:29.302 END TEST dpdk_mem_utility 00:05:29.302 ************************************ 00:05:29.302 10:42:18 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.302 10:42:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:29.302 10:42:18 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:29.302 10:42:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.302 10:42:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.302 10:42:18 -- common/autotest_common.sh@10 -- # set +x 
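The heap accounting above is the raw dump the dpdk_mem_utility test prints: busy malloc elements, the standard malloc element list, and the memzone associations, each entry reported as an address plus a size in MiB. When a dump like this needs a quick sanity check, the element records can be totalled with plain text tools. A minimal sketch, not part of the test suite; the capture file name mem_stats.txt is hypothetical:

    # Sum every "with size: <n> MiB" element record from a captured dump.
    grep -o 'with size: [0-9.]* MiB' mem_stats.txt \
        | awk '{ n++; total += $3 }
               END { printf "%d elements, %.6f MiB total\n", n, total }'

Header lines such as "list of standard malloc elements. size: ..." use "size:" without a leading "with", so the grep pattern counts only individual elements.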
00:05:29.302 ************************************ 00:05:29.302 START TEST event 00:05:29.302 ************************************ 00:05:29.302 10:42:18 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:29.302 * Looking for test storage... 00:05:29.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:29.302 10:42:18 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:29.302 10:42:18 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:29.302 10:42:18 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.561 10:42:18 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.561 10:42:18 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.561 10:42:18 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.561 10:42:18 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.561 10:42:18 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.561 10:42:18 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.561 10:42:18 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.561 10:42:18 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.561 10:42:18 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.561 10:42:18 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.561 10:42:18 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.561 10:42:18 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.561 10:42:18 event -- scripts/common.sh@344 -- # case "$op" in 00:05:29.561 10:42:18 event -- scripts/common.sh@345 -- # : 1 00:05:29.561 10:42:18 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.561 10:42:18 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.561 10:42:18 event -- scripts/common.sh@365 -- # decimal 1 00:05:29.561 10:42:18 event -- scripts/common.sh@353 -- # local d=1 00:05:29.561 10:42:18 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.561 10:42:18 event -- scripts/common.sh@355 -- # echo 1 00:05:29.561 10:42:18 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.561 10:42:18 event -- scripts/common.sh@366 -- # decimal 2 00:05:29.561 10:42:18 event -- scripts/common.sh@353 -- # local d=2 00:05:29.561 10:42:18 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.561 10:42:18 event -- scripts/common.sh@355 -- # echo 2 00:05:29.561 10:42:18 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.561 10:42:18 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.561 10:42:18 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.561 10:42:18 event -- scripts/common.sh@368 -- # return 0 00:05:29.561 10:42:18 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.561 10:42:18 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.561 --rc genhtml_branch_coverage=1 00:05:29.561 --rc genhtml_function_coverage=1 00:05:29.561 --rc genhtml_legend=1 00:05:29.561 --rc geninfo_all_blocks=1 00:05:29.561 --rc geninfo_unexecuted_blocks=1 00:05:29.561 00:05:29.561 ' 00:05:29.561 10:42:18 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.561 --rc genhtml_branch_coverage=1 00:05:29.561 --rc genhtml_function_coverage=1 00:05:29.561 --rc genhtml_legend=1 00:05:29.561 --rc 
geninfo_all_blocks=1 00:05:29.561 --rc geninfo_unexecuted_blocks=1 00:05:29.561 00:05:29.561 ' 00:05:29.561 10:42:18 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.561 --rc genhtml_branch_coverage=1 00:05:29.561 --rc genhtml_function_coverage=1 00:05:29.561 --rc genhtml_legend=1 00:05:29.561 --rc geninfo_all_blocks=1 00:05:29.561 --rc geninfo_unexecuted_blocks=1 00:05:29.561 00:05:29.561 ' 00:05:29.561 10:42:18 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.561 --rc genhtml_branch_coverage=1 00:05:29.561 --rc genhtml_function_coverage=1 00:05:29.561 --rc genhtml_legend=1 00:05:29.561 --rc geninfo_all_blocks=1 00:05:29.561 --rc geninfo_unexecuted_blocks=1 00:05:29.561 00:05:29.561 ' 00:05:29.561 10:42:18 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:29.561 10:42:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:29.561 10:42:18 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:29.561 10:42:18 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:29.561 10:42:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.561 10:42:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.561 ************************************ 00:05:29.561 START TEST event_perf 00:05:29.561 ************************************ 00:05:29.561 10:42:18 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:29.561 Running I/O for 1 seconds...[2024-11-20 10:42:18.649651] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:05:29.561 [2024-11-20 10:42:18.649774] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58873 ] 00:05:29.820 [2024-11-20 10:42:18.834053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.820 [2024-11-20 10:42:18.952527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.820 [2024-11-20 10:42:18.952736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.820 [2024-11-20 10:42:18.952873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.820 Running I/O for 1 seconds...[2024-11-20 10:42:18.952904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.224 00:05:31.224 lcore 0: 109568 00:05:31.224 lcore 1: 109569 00:05:31.224 lcore 2: 109572 00:05:31.224 lcore 3: 109569 00:05:31.224 done. 
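The four lcore counters above are the event_perf result: each reactor's event count for the one-second run requested with -t 1, with the -m 0xF mask placing a reactor on cores 0 through 3 (here roughly 110 thousand events per core). A sketch of reproducing the run by hand with the same binary and arguments the trace shows; root privileges and preconfigured hugepages are assumed, as for any SPDK app:

    sudo /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1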
00:05:31.224 00:05:31.224 real 0m1.593s 00:05:31.224 user 0m4.347s 00:05:31.224 sys 0m0.123s 00:05:31.224 10:42:20 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.224 10:42:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.224 ************************************ 00:05:31.224 END TEST event_perf 00:05:31.224 ************************************ 00:05:31.224 10:42:20 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:31.224 10:42:20 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:31.224 10:42:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.224 10:42:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.224 ************************************ 00:05:31.224 START TEST event_reactor 00:05:31.224 ************************************ 00:05:31.224 10:42:20 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:31.224 [2024-11-20 10:42:20.321076] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:05:31.224 [2024-11-20 10:42:20.321202] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58913 ] 00:05:31.482 [2024-11-20 10:42:20.488299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.483 [2024-11-20 10:42:20.596683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.894 test_start 00:05:32.894 oneshot 00:05:32.894 tick 100 00:05:32.894 tick 100 00:05:32.894 tick 250 00:05:32.894 tick 100 00:05:32.894 tick 100 00:05:32.894 tick 250 00:05:32.894 tick 500 00:05:32.894 tick 100 00:05:32.894 tick 100 00:05:32.894 tick 100 00:05:32.894 tick 250 00:05:32.894 tick 100 00:05:32.894 tick 100 00:05:32.894 test_end 00:05:32.894 ************************************ 00:05:32.894 END TEST event_reactor 00:05:32.894 ************************************ 00:05:32.894 00:05:32.894 real 0m1.552s 00:05:32.894 user 0m1.338s 00:05:32.894 sys 0m0.105s 00:05:32.894 10:42:21 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.894 10:42:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:32.894 10:42:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.894 10:42:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:32.894 10:42:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.894 10:42:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.894 ************************************ 00:05:32.894 START TEST event_reactor_perf 00:05:32.894 ************************************ 00:05:32.894 10:42:21 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:32.894 [2024-11-20 10:42:21.951213] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:05:32.894 [2024-11-20 10:42:21.951334] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58949 ] 00:05:32.894 [2024-11-20 10:42:22.133619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.152 [2024-11-20 10:42:22.245754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.530 test_start 00:05:34.530 test_end 00:05:34.530 Performance: 391158 events per second 00:05:34.530 00:05:34.530 real 0m1.564s 00:05:34.530 user 0m1.356s 00:05:34.530 sys 0m0.100s 00:05:34.530 10:42:23 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.530 ************************************ 00:05:34.530 END TEST event_reactor_perf 00:05:34.530 ************************************ 00:05:34.530 10:42:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.530 10:42:23 event -- event/event.sh@49 -- # uname -s 00:05:34.530 10:42:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:34.530 10:42:23 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:34.530 10:42:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.530 10:42:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.530 10:42:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.530 ************************************ 00:05:34.530 START TEST event_scheduler 00:05:34.530 ************************************ 00:05:34.530 10:42:23 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:34.530 * Looking for test storage... 
00:05:34.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:34.530 10:42:23 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.530 10:42:23 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.530 10:42:23 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.530 10:42:23 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.530 10:42:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:34.789 10:42:23 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.789 10:42:23 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.789 10:42:23 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.789 10:42:23 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:34.789 10:42:23 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.789 10:42:23 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.789 --rc genhtml_branch_coverage=1 00:05:34.789 --rc genhtml_function_coverage=1 00:05:34.789 --rc genhtml_legend=1 00:05:34.789 --rc geninfo_all_blocks=1 00:05:34.789 --rc geninfo_unexecuted_blocks=1 00:05:34.789 00:05:34.789 ' 00:05:34.789 10:42:23 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.789 --rc genhtml_branch_coverage=1 00:05:34.789 --rc genhtml_function_coverage=1 00:05:34.789 --rc genhtml_legend=1 00:05:34.789 --rc geninfo_all_blocks=1 00:05:34.789 --rc geninfo_unexecuted_blocks=1 00:05:34.789 00:05:34.789 ' 00:05:34.789 10:42:23 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.789 --rc genhtml_branch_coverage=1 00:05:34.789 --rc genhtml_function_coverage=1 00:05:34.789 --rc genhtml_legend=1 00:05:34.789 --rc geninfo_all_blocks=1 00:05:34.789 --rc geninfo_unexecuted_blocks=1 00:05:34.789 00:05:34.789 ' 00:05:34.789 10:42:23 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.789 --rc genhtml_branch_coverage=1 00:05:34.789 --rc genhtml_function_coverage=1 00:05:34.789 --rc genhtml_legend=1 00:05:34.789 --rc geninfo_all_blocks=1 00:05:34.789 --rc geninfo_unexecuted_blocks=1 00:05:34.789 00:05:34.789 ' 00:05:34.789 10:42:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:34.789 10:42:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59021 00:05:34.789 10:42:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:34.789 10:42:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.789 10:42:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59021 00:05:34.789 10:42:23 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59021 ']' 00:05:34.789 10:42:23 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.789 10:42:23 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.789 10:42:23 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.789 10:42:23 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.790 10:42:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:34.790 [2024-11-20 10:42:23.881623] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:05:34.790 [2024-11-20 10:42:23.881951] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59021 ] 00:05:35.048 [2024-11-20 10:42:24.065116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:35.048 [2024-11-20 10:42:24.185503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.048 [2024-11-20 10:42:24.185884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.048 [2024-11-20 10:42:24.185841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.048 [2024-11-20 10:42:24.185712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.622 10:42:24 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.622 10:42:24 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:35.622 10:42:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:35.622 10:42:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.622 10:42:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.622 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.622 POWER: Cannot set governor of lcore 0 to userspace 00:05:35.622 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.622 POWER: Cannot set governor of lcore 0 to performance 00:05:35.622 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.622 POWER: Cannot set governor of lcore 0 to userspace 00:05:35.622 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:35.622 POWER: Cannot set governor of lcore 0 to userspace 00:05:35.622 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:35.622 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:35.622 POWER: Unable to set Power Management Environment for lcore 0 00:05:35.622 [2024-11-20 10:42:24.719204] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:35.622 [2024-11-20 10:42:24.719229] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:35.622 [2024-11-20 10:42:24.719241] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:35.622 [2024-11-20 10:42:24.719261] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:35.622 [2024-11-20 10:42:24.719272] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:35.622 [2024-11-20 10:42:24.719284] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:35.622 10:42:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.622 10:42:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:35.622 10:42:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.622 10:42:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.881 [2024-11-20 10:42:25.028682] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:35.881 10:42:25 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.881 10:42:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:35.881 10:42:25 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.881 10:42:25 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.881 10:42:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.881 ************************************ 00:05:35.881 START TEST scheduler_create_thread 00:05:35.881 ************************************ 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.881 2 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.881 3 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.881 4 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.881 5 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.881 6 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.881 7 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.881 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.219 8 00:05:36.219 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.219 10:42:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:36.219 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.219 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.219 9 00:05:36.219 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.219 10:42:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:36.219 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.219 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.219 10 00:05:36.219 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.219 10:42:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:36.219 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.219 10:42:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.595 10:42:26 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.595 10:42:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:37.595 10:42:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:37.596 10:42:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.596 10:42:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.164 10:42:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.164 10:42:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:38.164 10:42:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.164 10:42:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.101 10:42:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.101 10:42:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:39.101 10:42:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:39.101 10:42:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.101 10:42:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.037 ************************************ 00:05:40.037 END TEST scheduler_create_thread 00:05:40.037 ************************************ 00:05:40.037 10:42:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.037 00:05:40.037 real 0m3.883s 00:05:40.037 user 0m0.025s 00:05:40.037 sys 0m0.007s 00:05:40.037 10:42:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.037 10:42:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.037 10:42:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:40.037 10:42:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59021 00:05:40.037 10:42:28 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59021 ']' 00:05:40.037 10:42:28 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59021 00:05:40.037 10:42:28 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:40.037 10:42:28 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.037 10:42:28 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59021 00:05:40.037 killing process with pid 59021 00:05:40.037 10:42:29 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:40.037 10:42:29 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:40.037 10:42:29 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59021' 00:05:40.037 10:42:29 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59021 00:05:40.037 10:42:29 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59021 00:05:40.296 [2024-11-20 10:42:29.306481] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:41.233 00:05:41.233 real 0m6.911s 00:05:41.233 user 0m14.180s 00:05:41.233 sys 0m0.572s 00:05:41.233 ************************************ 00:05:41.233 END TEST event_scheduler 00:05:41.233 ************************************ 00:05:41.233 10:42:30 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.233 10:42:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.492 10:42:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:41.492 10:42:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:41.492 10:42:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.492 10:42:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.492 10:42:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.492 ************************************ 00:05:41.492 START TEST app_repeat 00:05:41.492 ************************************ 00:05:41.493 10:42:30 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:41.493 10:42:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.493 10:42:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.493 10:42:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:41.493 10:42:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.493 10:42:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:41.493 10:42:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:41.493 10:42:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:41.493 Process app_repeat pid: 59148 00:05:41.493 10:42:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59148 00:05:41.493 10:42:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.493 10:42:30 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:41.493 10:42:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59148' 00:05:41.493 10:42:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:41.493 spdk_app_start Round 0 00:05:41.493 10:42:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:41.493 10:42:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59148 /var/tmp/spdk-nbd.sock 00:05:41.493 10:42:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59148 ']' 00:05:41.493 10:42:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.493 10:42:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.493 10:42:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.493 10:42:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.493 10:42:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.493 [2024-11-20 10:42:30.604212] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
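app_repeat, which starts here, exercises the NBD path repeatedly: it launches the app_repeat application on the /var/tmp/spdk-nbd.sock RPC socket, creates the Malloc0 and Malloc1 bdevs, binds them to /dev/nbd0 and /dev/nbd1, and repeats the cycle (repeat_times=4) across three spdk_app_start rounds. One setup/teardown pass can be driven by hand with the same rpc.py calls the trace below shows; the two cleanup RPC names are the standard SPDK ones, assumed here rather than taken from this excerpt:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s "$sock" bdev_malloc_create 64 4096         # 64 MiB bdev with 4 KiB blocks -> Malloc0
    $rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0   # export the bdev as /dev/nbd0
    $rpc -s "$sock" nbd_stop_disk /dev/nbd0            # assumed cleanup: detach the NBD device
    $rpc -s "$sock" bdev_malloc_delete Malloc0         # assumed cleanup: remove the bdev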
00:05:41.493 [2024-11-20 10:42:30.604324] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59148 ] 00:05:41.751 [2024-11-20 10:42:30.783755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.751 [2024-11-20 10:42:30.899846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.751 [2024-11-20 10:42:30.899883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.317 10:42:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.317 10:42:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:42.317 10:42:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.576 Malloc0 00:05:42.576 10:42:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:42.835 Malloc1 00:05:42.835 10:42:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.835 10:42:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.835 10:42:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.835 10:42:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:42.835 10:42:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.835 10:42:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:42.835 10:42:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:42.835 10:42:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.835 10:42:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:42.835 10:42:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:42.835 10:42:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.835 10:42:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:42.835 10:42:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:42.835 10:42:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:42.835 10:42:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.835 10:42:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.094 /dev/nbd0 00:05:43.094 10:42:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.094 10:42:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.094 10:42:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:43.094 10:42:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:43.094 10:42:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:43.094 10:42:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:43.094 10:42:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:43.094 10:42:32 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:43.094 10:42:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:43.094 10:42:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:43.094 10:42:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.094 1+0 records in 00:05:43.094 1+0 records out 00:05:43.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211745 s, 19.3 MB/s 00:05:43.094 10:42:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.094 10:42:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:43.094 10:42:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.094 10:42:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:43.094 10:42:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:43.094 10:42:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.094 10:42:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.094 10:42:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.353 /dev/nbd1 00:05:43.353 10:42:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.353 10:42:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.353 10:42:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:43.353 10:42:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:43.353 10:42:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:43.353 10:42:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:43.353 10:42:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:43.353 10:42:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:43.353 10:42:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:43.353 10:42:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:43.353 10:42:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.353 1+0 records in 00:05:43.353 1+0 records out 00:05:43.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228254 s, 17.9 MB/s 00:05:43.353 10:42:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.353 10:42:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:43.353 10:42:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:43.353 10:42:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:43.353 10:42:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:43.353 10:42:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.353 10:42:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.353 10:42:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.353 10:42:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:05:43.353 10:42:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.611 10:42:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.611 { 00:05:43.611 "nbd_device": "/dev/nbd0", 00:05:43.611 "bdev_name": "Malloc0" 00:05:43.611 }, 00:05:43.611 { 00:05:43.612 "nbd_device": "/dev/nbd1", 00:05:43.612 "bdev_name": "Malloc1" 00:05:43.612 } 00:05:43.612 ]' 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.612 { 00:05:43.612 "nbd_device": "/dev/nbd0", 00:05:43.612 "bdev_name": "Malloc0" 00:05:43.612 }, 00:05:43.612 { 00:05:43.612 "nbd_device": "/dev/nbd1", 00:05:43.612 "bdev_name": "Malloc1" 00:05:43.612 } 00:05:43.612 ]' 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.612 /dev/nbd1' 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.612 /dev/nbd1' 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.612 256+0 records in 00:05:43.612 256+0 records out 00:05:43.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119087 s, 88.1 MB/s 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.612 256+0 records in 00:05:43.612 256+0 records out 00:05:43.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029997 s, 35.0 MB/s 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.612 256+0 records in 00:05:43.612 256+0 records out 00:05:43.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0364954 s, 28.7 MB/s 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.612 10:42:32 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.612 10:42:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.870 10:42:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.870 10:42:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.870 10:42:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.870 10:42:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.870 10:42:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.870 10:42:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.870 10:42:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.870 10:42:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.870 10:42:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.870 10:42:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.130 10:42:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.130 10:42:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.130 10:42:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.130 10:42:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.130 10:42:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.130 10:42:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.130 10:42:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.130 10:42:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.130 10:42:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.130 10:42:33 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.130 10:42:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.389 10:42:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.389 10:42:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.389 10:42:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.389 10:42:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.389 10:42:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.389 10:42:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.389 10:42:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.389 10:42:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.389 10:42:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.389 10:42:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.389 10:42:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.389 10:42:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.389 10:42:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.006 10:42:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:46.385 [2024-11-20 10:42:35.201375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.385 [2024-11-20 10:42:35.312442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.385 [2024-11-20 10:42:35.312443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.385 [2024-11-20 10:42:35.508964] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:46.385 [2024-11-20 10:42:35.509028] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.291 10:42:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.291 spdk_app_start Round 1 00:05:48.291 10:42:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:48.291 10:42:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59148 /var/tmp/spdk-nbd.sock 00:05:48.291 10:42:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59148 ']' 00:05:48.291 10:42:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.291 10:42:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:48.291 10:42:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:48.291 10:42:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.291 10:42:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.291 10:42:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.291 10:42:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:48.291 10:42:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.291 Malloc0 00:05:48.550 10:42:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.550 Malloc1 00:05:48.809 10:42:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.809 10:42:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.809 10:42:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.809 10:42:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.809 10:42:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.809 10:42:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.809 10:42:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.809 10:42:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.809 10:42:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.809 10:42:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.809 10:42:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.809 10:42:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.809 10:42:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.809 10:42:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.809 10:42:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.809 10:42:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.809 /dev/nbd0 00:05:48.809 10:42:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.809 10:42:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.809 10:42:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:48.809 10:42:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:48.809 10:42:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.809 10:42:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.809 10:42:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:48.809 10:42:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:48.809 10:42:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.809 10:42:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.809 10:42:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.809 1+0 records in 00:05:48.809 1+0 records out 
00:05:48.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250264 s, 16.4 MB/s 00:05:49.068 10:42:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.068 10:42:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:49.068 10:42:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.068 10:42:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.068 10:42:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:49.068 10:42:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.068 10:42:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.069 10:42:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.069 /dev/nbd1 00:05:49.069 10:42:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.069 10:42:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.069 10:42:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:49.069 10:42:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:49.069 10:42:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.069 10:42:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.069 10:42:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:49.069 10:42:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:49.069 10:42:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:49.069 10:42:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.069 10:42:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.069 1+0 records in 00:05:49.069 1+0 records out 00:05:49.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399188 s, 10.3 MB/s 00:05:49.069 10:42:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.328 10:42:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:49.328 10:42:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.328 10:42:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.328 10:42:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:49.328 10:42:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.328 10:42:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.328 10:42:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.328 10:42:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.328 10:42:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.328 10:42:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.328 { 00:05:49.328 "nbd_device": "/dev/nbd0", 00:05:49.328 "bdev_name": "Malloc0" 00:05:49.328 }, 00:05:49.328 { 00:05:49.328 "nbd_device": "/dev/nbd1", 00:05:49.328 "bdev_name": "Malloc1" 00:05:49.328 } 
00:05:49.328 ]' 00:05:49.328 10:42:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.328 { 00:05:49.328 "nbd_device": "/dev/nbd0", 00:05:49.328 "bdev_name": "Malloc0" 00:05:49.328 }, 00:05:49.328 { 00:05:49.328 "nbd_device": "/dev/nbd1", 00:05:49.328 "bdev_name": "Malloc1" 00:05:49.328 } 00:05:49.328 ]' 00:05:49.328 10:42:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.587 /dev/nbd1' 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.587 /dev/nbd1' 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.587 256+0 records in 00:05:49.587 256+0 records out 00:05:49.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010745 s, 97.6 MB/s 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.587 256+0 records in 00:05:49.587 256+0 records out 00:05:49.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293061 s, 35.8 MB/s 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.587 256+0 records in 00:05:49.587 256+0 records out 00:05:49.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0318667 s, 32.9 MB/s 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.587 10:42:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.847 10:42:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.847 10:42:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.847 10:42:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.847 10:42:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.847 10:42:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.847 10:42:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.847 10:42:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.847 10:42:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.847 10:42:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.847 10:42:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.109 10:42:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.109 10:42:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.109 10:42:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.109 10:42:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.109 10:42:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.109 10:42:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.109 10:42:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.109 10:42:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.109 10:42:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.109 10:42:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.109 10:42:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.369 10:42:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.369 10:42:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.369 10:42:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:50.369 10:42:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.369 10:42:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.369 10:42:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.369 10:42:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.369 10:42:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.369 10:42:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.369 10:42:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.369 10:42:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.369 10:42:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.369 10:42:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.627 10:42:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.004 [2024-11-20 10:42:40.955316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.004 [2024-11-20 10:42:41.067050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.004 [2024-11-20 10:42:41.067078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.262 [2024-11-20 10:42:41.258181] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.262 [2024-11-20 10:42:41.258485] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.639 spdk_app_start Round 2 00:05:53.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.639 10:42:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.639 10:42:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:53.639 10:42:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59148 /var/tmp/spdk-nbd.sock 00:05:53.639 10:42:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59148 ']' 00:05:53.639 10:42:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.639 10:42:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.639 10:42:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:53.639 10:42:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.639 10:42:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.930 10:42:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.930 10:42:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:53.930 10:42:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.220 Malloc0 00:05:54.220 10:42:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.478 Malloc1 00:05:54.479 10:42:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.479 10:42:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.479 10:42:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.479 10:42:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.479 10:42:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.479 10:42:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.479 10:42:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.479 10:42:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.479 10:42:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.479 10:42:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.479 10:42:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.479 10:42:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.479 10:42:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.479 10:42:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.479 10:42:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.479 10:42:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.738 /dev/nbd0 00:05:54.738 10:42:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.738 10:42:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.738 10:42:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:54.738 10:42:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.738 10:42:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.738 10:42:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.738 10:42:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:54.738 10:42:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.738 10:42:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.738 10:42:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.738 10:42:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.738 1+0 records in 00:05:54.738 1+0 records out 
00:05:54.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320168 s, 12.8 MB/s 00:05:54.738 10:42:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.738 10:42:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.738 10:42:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.738 10:42:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.738 10:42:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.738 10:42:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.738 10:42:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.738 10:42:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.997 /dev/nbd1 00:05:54.997 10:42:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.997 10:42:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.997 10:42:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:54.997 10:42:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.997 10:42:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.997 10:42:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.997 10:42:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:54.997 10:42:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.997 10:42:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.997 10:42:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.997 10:42:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.997 1+0 records in 00:05:54.997 1+0 records out 00:05:54.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388619 s, 10.5 MB/s 00:05:54.997 10:42:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.997 10:42:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.997 10:42:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.997 10:42:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.997 10:42:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.997 10:42:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.997 10:42:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.997 10:42:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.997 10:42:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.997 10:42:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:55.257 { 00:05:55.257 "nbd_device": "/dev/nbd0", 00:05:55.257 "bdev_name": "Malloc0" 00:05:55.257 }, 00:05:55.257 { 00:05:55.257 "nbd_device": "/dev/nbd1", 00:05:55.257 "bdev_name": "Malloc1" 00:05:55.257 } 
00:05:55.257 ]' 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:55.257 { 00:05:55.257 "nbd_device": "/dev/nbd0", 00:05:55.257 "bdev_name": "Malloc0" 00:05:55.257 }, 00:05:55.257 { 00:05:55.257 "nbd_device": "/dev/nbd1", 00:05:55.257 "bdev_name": "Malloc1" 00:05:55.257 } 00:05:55.257 ]' 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:55.257 /dev/nbd1' 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:55.257 /dev/nbd1' 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.257 256+0 records in 00:05:55.257 256+0 records out 00:05:55.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00527416 s, 199 MB/s 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.257 256+0 records in 00:05:55.257 256+0 records out 00:05:55.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278122 s, 37.7 MB/s 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.257 256+0 records in 00:05:55.257 256+0 records out 00:05:55.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0361614 s, 29.0 MB/s 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.257 10:42:44 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.257 10:42:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.516 10:42:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.516 10:42:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.516 10:42:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.516 10:42:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.516 10:42:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.516 10:42:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.516 10:42:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.516 10:42:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.516 10:42:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.516 10:42:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.775 10:42:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.775 10:42:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.775 10:42:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.775 10:42:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.775 10:42:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.775 10:42:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.775 10:42:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.775 10:42:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.775 10:42:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.775 10:42:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.775 10:42:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.033 10:42:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:56.033 10:42:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.033 10:42:45 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:56.033 10:42:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.033 10:42:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.033 10:42:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.033 10:42:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.033 10:42:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.033 10:42:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.033 10:42:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.033 10:42:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.033 10:42:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.033 10:42:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.601 10:42:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.538 [2024-11-20 10:42:46.702469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.797 [2024-11-20 10:42:46.813232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.797 [2024-11-20 10:42:46.813233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.797 [2024-11-20 10:42:47.005073] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.797 [2024-11-20 10:42:47.005128] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.702 10:42:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59148 /var/tmp/spdk-nbd.sock 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59148 ']' 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.702 10:42:48 event.app_repeat -- event/event.sh@39 -- # killprocess 59148 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59148 ']' 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59148 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59148 00:05:59.702 killing process with pid 59148 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59148' 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59148 00:05:59.702 10:42:48 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59148 00:06:00.666 spdk_app_start is called in Round 0. 00:06:00.667 Shutdown signal received, stop current app iteration 00:06:00.667 Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 reinitialization... 00:06:00.667 spdk_app_start is called in Round 1. 00:06:00.667 Shutdown signal received, stop current app iteration 00:06:00.667 Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 reinitialization... 00:06:00.667 spdk_app_start is called in Round 2. 00:06:00.667 Shutdown signal received, stop current app iteration 00:06:00.667 Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 reinitialization... 00:06:00.667 spdk_app_start is called in Round 3. 00:06:00.667 Shutdown signal received, stop current app iteration 00:06:00.667 10:42:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:00.667 10:42:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:00.667 00:06:00.667 real 0m19.305s 00:06:00.667 user 0m41.153s 00:06:00.667 sys 0m2.965s 00:06:00.667 10:42:49 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.667 10:42:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.667 ************************************ 00:06:00.667 END TEST app_repeat 00:06:00.667 ************************************ 00:06:00.667 10:42:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:00.667 10:42:49 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:00.667 10:42:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.667 10:42:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.667 10:42:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.667 ************************************ 00:06:00.667 START TEST cpu_locks 00:06:00.667 ************************************ 00:06:00.667 10:42:49 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:00.926 * Looking for test storage... 
00:06:00.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:00.926 10:42:50 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.926 10:42:50 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.926 10:42:50 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.926 10:42:50 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.926 10:42:50 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:00.926 10:42:50 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.926 10:42:50 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.926 --rc genhtml_branch_coverage=1 00:06:00.926 --rc genhtml_function_coverage=1 00:06:00.927 --rc genhtml_legend=1 00:06:00.927 --rc geninfo_all_blocks=1 00:06:00.927 --rc geninfo_unexecuted_blocks=1 00:06:00.927 00:06:00.927 ' 00:06:00.927 10:42:50 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.927 --rc genhtml_branch_coverage=1 00:06:00.927 --rc genhtml_function_coverage=1 
00:06:00.927 --rc genhtml_legend=1 00:06:00.927 --rc geninfo_all_blocks=1 00:06:00.927 --rc geninfo_unexecuted_blocks=1 00:06:00.927 00:06:00.927 ' 00:06:00.927 10:42:50 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.927 --rc genhtml_branch_coverage=1 00:06:00.927 --rc genhtml_function_coverage=1 00:06:00.927 --rc genhtml_legend=1 00:06:00.927 --rc geninfo_all_blocks=1 00:06:00.927 --rc geninfo_unexecuted_blocks=1 00:06:00.927 00:06:00.927 ' 00:06:00.927 10:42:50 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.927 --rc genhtml_branch_coverage=1 00:06:00.927 --rc genhtml_function_coverage=1 00:06:00.927 --rc genhtml_legend=1 00:06:00.927 --rc geninfo_all_blocks=1 00:06:00.927 --rc geninfo_unexecuted_blocks=1 00:06:00.927 00:06:00.927 ' 00:06:00.927 10:42:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:00.927 10:42:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:00.927 10:42:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:00.927 10:42:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:00.927 10:42:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.927 10:42:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.927 10:42:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.927 ************************************ 00:06:00.927 START TEST default_locks 00:06:00.927 ************************************ 00:06:00.927 10:42:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:00.927 10:42:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59590 00:06:00.927 10:42:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.927 10:42:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59590 00:06:00.927 10:42:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59590 ']' 00:06:00.927 10:42:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.927 10:42:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.927 10:42:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.927 10:42:50 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.927 10:42:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.185 [2024-11-20 10:42:50.268048] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
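The lt/cmp_versions trace at the top of this section is a pure-bash version comparison: each version string is split on '.', '-' and ':' and the components are compared numerically from the left. A minimal sketch of the technique, with simplified helper logic and numeric components assumed (the real scripts/common.sh handles more cases):

    lt() {                                   # usage: lt 1.15 2  ->  exit 0 when $1 < $2
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing component decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                             # equal versions are not less-than
    }

Here lt 1.15 2 succeeds (1 < 2 on the first component), which is why the run above picks the pre-2.0 lcov flags (--rc lcov_branch_coverage=1 and friends).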
00:06:01.185 [2024-11-20 10:42:50.268166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59590 ] 00:06:01.444 [2024-11-20 10:42:50.446327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.444 [2024-11-20 10:42:50.558881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.379 10:42:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.379 10:42:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:02.379 10:42:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59590 00:06:02.379 10:42:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59590 00:06:02.379 10:42:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.637 10:42:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59590 00:06:02.637 10:42:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59590 ']' 00:06:02.637 10:42:51 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59590 00:06:02.637 10:42:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:02.637 10:42:51 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.637 10:42:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59590 00:06:02.896 10:42:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.896 10:42:51 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.896 killing process with pid 59590 00:06:02.896 10:42:51 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59590' 00:06:02.896 10:42:51 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59590 00:06:02.896 10:42:51 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59590 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59590 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59590 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59590 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59590 ']' 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
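locks_exist, traced above right after startup, is how every test in this file decides whether a target currently holds its CPU core locks: lslocks lists the file locks held by the pid and grep looks for the per-core spdk_cpu_lock entries. Reconstructed from the cpu_locks.sh@22 trace (a sketch, not a verbatim copy):

    locks_exist() {
        # exit 0 only if pid $1 holds at least one spdk_cpu_lock_* file lock
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    locks_exist 59590 && echo 'target 59590 holds its core locks'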
00:06:05.431 ERROR: process (pid: 59590) is no longer running 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.431 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59590) - No such process 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:05.431 00:06:05.431 real 0m4.040s 00:06:05.431 user 0m3.924s 00:06:05.431 sys 0m0.737s 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.431 ************************************ 00:06:05.431 10:42:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.431 END TEST default_locks 00:06:05.431 ************************************ 00:06:05.431 10:42:54 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:05.431 10:42:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.431 10:42:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.431 10:42:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.431 ************************************ 00:06:05.431 START TEST default_locks_via_rpc 00:06:05.431 ************************************ 00:06:05.431 10:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:05.431 10:42:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59665 00:06:05.431 10:42:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.431 10:42:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59665 00:06:05.431 10:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59665 ']' 00:06:05.431 10:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.431 10:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.431 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:06:05.431 10:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.431 10:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.431 10:42:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.431 [2024-11-20 10:42:54.383744] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:06:05.431 [2024-11-20 10:42:54.383887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59665 ] 00:06:05.431 [2024-11-20 10:42:54.557122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.431 [2024-11-20 10:42:54.668827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59665 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.372 10:42:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59665 00:06:06.940 10:42:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59665 00:06:06.940 10:42:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59665 ']' 00:06:06.940 10:42:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59665 00:06:06.940 10:42:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:06.940 10:42:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.940 10:42:56 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59665 00:06:06.940 10:42:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.940 10:42:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.940 killing process with pid 59665 00:06:06.940 10:42:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59665' 00:06:06.940 10:42:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59665 00:06:06.940 10:42:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59665 00:06:09.470 00:06:09.470 real 0m4.080s 00:06:09.470 user 0m4.041s 00:06:09.470 sys 0m0.685s 00:06:09.470 10:42:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.470 10:42:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.470 ************************************ 00:06:09.470 END TEST default_locks_via_rpc 00:06:09.470 ************************************ 00:06:09.470 10:42:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:09.470 10:42:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.470 10:42:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.470 10:42:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.470 ************************************ 00:06:09.470 START TEST non_locking_app_on_locked_coremask 00:06:09.470 ************************************ 00:06:09.470 10:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:09.470 10:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59739 00:06:09.470 10:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59739 /var/tmp/spdk.sock 00:06:09.470 10:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.470 10:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59739 ']' 00:06:09.470 10:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.470 10:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.470 10:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.470 10:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.470 10:42:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.470 [2024-11-20 10:42:58.530362] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
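default_locks_via_rpc, which finishes above, exercised the same locks without restarting the target: framework_disable_cpumask_locks released the per-core lock files and framework_enable_cpumask_locks re-claimed them, with no_locks verifying the gap in between. The rpc_cmd calls in the trace map onto SPDK's rpc.py client roughly like this (a manual equivalent, assuming the default socket used in this run):

    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # drop the core lock files
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-claim; fails if another pid won the race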
00:06:09.470 [2024-11-20 10:42:58.530501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59739 ] 00:06:09.470 [2024-11-20 10:42:58.713727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.729 [2024-11-20 10:42:58.826924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.663 10:42:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.664 10:42:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:10.664 10:42:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:10.664 10:42:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59755 00:06:10.664 10:42:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59755 /var/tmp/spdk2.sock 00:06:10.664 10:42:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59755 ']' 00:06:10.664 10:42:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.664 10:42:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.664 10:42:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.664 10:42:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.664 10:42:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.664 [2024-11-20 10:42:59.755558] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:06:10.664 [2024-11-20 10:42:59.755683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59755 ] 00:06:10.923 [2024-11-20 10:42:59.935434] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
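The two launches above are the whole point of non_locking_app_on_locked_coremask: pid 59739 claims core 0's lock, yet pid 59755 still comes up on the same core because --disable-cpumask-locks skips the claim, hence the 'CPU core locks deactivated' notice. The shape of the pair, as run:

    build/bin/spdk_tgt -m 0x1 &                                                  # claims /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no claim, own RPC socket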
00:06:10.923 [2024-11-20 10:42:59.935488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.923 [2024-11-20 10:43:00.163492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.498 10:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.498 10:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:13.498 10:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59739 00:06:13.498 10:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59739 00:06:13.498 10:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.067 10:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59739 00:06:14.067 10:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59739 ']' 00:06:14.067 10:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59739 00:06:14.067 10:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:14.067 10:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.067 10:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59739 00:06:14.067 10:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.067 10:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.067 killing process with pid 59739 00:06:14.067 10:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59739' 00:06:14.067 10:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59739 00:06:14.067 10:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59739 00:06:19.337 10:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59755 00:06:19.337 10:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59755 ']' 00:06:19.337 10:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59755 00:06:19.337 10:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:19.337 10:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.337 10:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59755 00:06:19.337 10:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.337 10:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.337 killing process with pid 59755 00:06:19.337 10:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59755' 00:06:19.337 10:43:07 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59755 00:06:19.337 10:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59755 00:06:21.241 00:06:21.241 real 0m11.706s 00:06:21.241 user 0m11.946s 00:06:21.241 sys 0m1.424s 00:06:21.241 10:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.241 ************************************ 00:06:21.241 END TEST non_locking_app_on_locked_coremask 00:06:21.241 ************************************ 00:06:21.241 10:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.241 10:43:10 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:21.241 10:43:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.241 10:43:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.241 10:43:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.241 ************************************ 00:06:21.241 START TEST locking_app_on_unlocked_coremask 00:06:21.241 ************************************ 00:06:21.241 10:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:21.241 10:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59906 00:06:21.241 10:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59906 /var/tmp/spdk.sock 00:06:21.241 10:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:21.241 10:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59906 ']' 00:06:21.241 10:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.242 10:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.242 10:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.242 10:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.242 10:43:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.242 [2024-11-20 10:43:10.311267] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:06:21.242 [2024-11-20 10:43:10.311387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59906 ] 00:06:21.242 [2024-11-20 10:43:10.490102] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
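The teardown that closed the previous test is the killprocess helper seen throughout this file: probe the pid with kill -0, read back the process name as a sanity check (an SPDK target shows up as reactor_0), then kill and reap it so its lock files are released. A condensed sketch of the traced sequence:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1           # must still be running
        ps --no-headers -o comm= "$pid"      # e.g. reactor_0
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"           # reap; core lock files go away with the process
    }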
00:06:21.242 [2024-11-20 10:43:10.490146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.501 [2024-11-20 10:43:10.599215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.439 10:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.439 10:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:22.439 10:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59927 00:06:22.439 10:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59927 /var/tmp/spdk2.sock 00:06:22.439 10:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59927 ']' 00:06:22.439 10:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.439 10:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.439 10:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.439 10:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.439 10:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.439 10:43:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:22.439 [2024-11-20 10:43:11.553217] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
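waitforlisten, used after this launch and every other one in the section, blocks until the new target's RPC socket is usable or the process dies; the trace shows it retrying with max_retries=100. The details below are assumed, a rough sketch of the idea rather than the real autotest_common.sh helper (which polls the RPC layer itself):

    waitforlisten() {
        local pid=$1 addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            [[ -S $addr ]] && return 0               # socket node present, assume it accepts RPCs
            sleep 0.5
        done
        return 1
    }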
00:06:22.439 [2024-11-20 10:43:11.553350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59927 ] 00:06:22.700 [2024-11-20 10:43:11.741410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.959 [2024-11-20 10:43:11.968421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.926 10:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.926 10:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:24.926 10:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59927 00:06:24.926 10:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59927 00:06:24.926 10:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.864 10:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59906 00:06:25.864 10:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59906 ']' 00:06:25.864 10:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59906 00:06:25.864 10:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:25.864 10:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.864 10:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59906 00:06:25.864 10:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.864 10:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.864 killing process with pid 59906 00:06:25.864 10:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59906' 00:06:25.864 10:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59906 00:06:25.864 10:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59906 00:06:31.141 10:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59927 00:06:31.141 10:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59927 ']' 00:06:31.141 10:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59927 00:06:31.141 10:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:31.141 10:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.141 10:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59927 00:06:31.141 10:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.141 10:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.141 killing process with pid 59927 00:06:31.141 10:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59927' 00:06:31.141 10:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59927 00:06:31.141 10:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59927 00:06:33.048 00:06:33.048 real 0m11.639s 00:06:33.048 user 0m11.923s 00:06:33.048 sys 0m1.438s 00:06:33.048 10:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.048 ************************************ 00:06:33.048 END TEST locking_app_on_unlocked_coremask 00:06:33.048 ************************************ 00:06:33.048 10:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.048 10:43:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:33.048 10:43:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.048 10:43:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.048 10:43:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.048 ************************************ 00:06:33.048 START TEST locking_app_on_locked_coremask 00:06:33.048 ************************************ 00:06:33.048 10:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:33.048 10:43:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60078 00:06:33.048 10:43:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60078 /var/tmp/spdk.sock 00:06:33.048 10:43:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.048 10:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60078 ']' 00:06:33.048 10:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.048 10:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.048 10:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.048 10:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.048 10:43:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.048 [2024-11-20 10:43:22.046411] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:06:33.048 [2024-11-20 10:43:22.046541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60078 ] 00:06:33.048 [2024-11-20 10:43:22.231344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.307 [2024-11-20 10:43:22.343482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60099 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60099 /var/tmp/spdk2.sock 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60099 /var/tmp/spdk2.sock 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60099 /var/tmp/spdk2.sock 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60099 ']' 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.245 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.245 [2024-11-20 10:43:23.328428] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
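Here the second target, pid 60099, points at the same core as 60078 without --disable-cpumask-locks, so the test expects the launch to fail and wraps waitforlisten in the NOT helper whose internals are traced around this launch: run the command (after valid_exec_arg confirms it is runnable), treat exit codes above 128 as real failures (signal deaths), and otherwise invert the status. A simplified sketch:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: propagate, do not invert
        (( es == 0 )) && return 1        # command unexpectedly succeeded
        return 0                         # command failed, which is what we wanted
    }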
00:06:34.245 [2024-11-20 10:43:23.328641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60099 ] 00:06:34.504 [2024-11-20 10:43:23.545797] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60078 has claimed it. 00:06:34.505 [2024-11-20 10:43:23.545878] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:34.764 ERROR: process (pid: 60099) is no longer running 00:06:34.764 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60099) - No such process 00:06:34.764 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.764 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:34.764 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:34.764 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.764 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:34.764 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.764 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60078 00:06:34.764 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60078 00:06:34.764 10:43:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.334 10:43:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60078 00:06:35.334 10:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60078 ']' 00:06:35.334 10:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60078 00:06:35.334 10:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:35.334 10:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.334 10:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60078 00:06:35.334 10:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.334 10:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.334 killing process with pid 60078 00:06:35.334 10:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60078' 00:06:35.334 10:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60078 00:06:35.334 10:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60078 00:06:37.896 00:06:37.896 real 0m4.713s 00:06:37.896 user 0m4.863s 00:06:37.896 sys 0m0.878s 00:06:37.896 10:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.896 10:43:26 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:37.896 ************************************ 00:06:37.896 END TEST locking_app_on_locked_coremask 00:06:37.896 ************************************ 00:06:37.896 10:43:26 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:37.896 10:43:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.896 10:43:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.896 10:43:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.896 ************************************ 00:06:37.896 START TEST locking_overlapped_coremask 00:06:37.896 ************************************ 00:06:37.896 10:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:37.897 10:43:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60163 00:06:37.897 10:43:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60163 /var/tmp/spdk.sock 00:06:37.897 10:43:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:37.897 10:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60163 ']' 00:06:37.897 10:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.897 10:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.897 10:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.897 10:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.897 10:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.897 [2024-11-20 10:43:26.817900] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:06:37.897 [2024-11-20 10:43:26.818026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60163 ] 00:06:37.897 [2024-11-20 10:43:26.999659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.897 [2024-11-20 10:43:27.124903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.897 [2024-11-20 10:43:27.125074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.897 [2024-11-20 10:43:27.125109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60181 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60181 /var/tmp/spdk2.sock 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60181 /var/tmp/spdk2.sock 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60181 /var/tmp/spdk2.sock 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60181 ']' 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.833 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.092 [2024-11-20 10:43:28.109643] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
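The collision that follows is plain bitmask arithmetic: -m 0x7 gives the first target cores 0-2, -m 0x1c asks for cores 2-4, and the masks intersect on core 2, the only bit set in both. Checked in shell:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, bit 2, so both targets want core 2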
00:06:39.092 [2024-11-20 10:43:28.109754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60181 ] 00:06:39.092 [2024-11-20 10:43:28.294024] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60163 has claimed it. 00:06:39.092 [2024-11-20 10:43:28.294096] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.660 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60181) - No such process 00:06:39.660 ERROR: process (pid: 60181) is no longer running 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60163 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60163 ']' 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60163 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60163 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.660 killing process with pid 60163 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60163' 00:06:39.660 10:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60163 00:06:39.660 10:43:28 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60163 00:06:42.197 00:06:42.197 real 0m4.438s 00:06:42.197 user 0m11.956s 00:06:42.197 sys 0m0.635s 00:06:42.197 10:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.197 ************************************ 00:06:42.197 END TEST locking_overlapped_coremask 00:06:42.197 ************************************ 00:06:42.197 10:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.197 10:43:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:42.197 10:43:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.197 10:43:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.197 10:43:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.197 ************************************ 00:06:42.197 START TEST locking_overlapped_coremask_via_rpc 00:06:42.197 ************************************ 00:06:42.197 10:43:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:42.197 10:43:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60251 00:06:42.197 10:43:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60251 /var/tmp/spdk.sock 00:06:42.197 10:43:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:42.197 10:43:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60251 ']' 00:06:42.197 10:43:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.197 10:43:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.197 10:43:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.197 10:43:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.197 10:43:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.197 [2024-11-20 10:43:31.329567] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:06:42.197 [2024-11-20 10:43:31.329720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60251 ] 00:06:42.456 [2024-11-20 10:43:31.513199] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
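check_remaining_locks, traced at the end of the previous test, proves the surviving 0x7 target still holds exactly cores 0-2 by comparing the glob of live lock files against a brace expansion of the expected names, as in cpu_locks.sh@36-38:

    locks=(/var/tmp/spdk_cpu_lock_*)                      # whatever lock files actually exist
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # what a 0x7 mask should leave behind
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'exactly cores 0-2 are locked'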
00:06:42.456 [2024-11-20 10:43:31.513253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.456 [2024-11-20 10:43:31.640199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.456 [2024-11-20 10:43:31.640351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.456 [2024-11-20 10:43:31.640379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.392 10:43:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.392 10:43:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:43.392 10:43:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:43.392 10:43:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60269 00:06:43.392 10:43:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60269 /var/tmp/spdk2.sock 00:06:43.392 10:43:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60269 ']' 00:06:43.392 10:43:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.392 10:43:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.392 10:43:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.392 10:43:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.392 10:43:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.392 [2024-11-20 10:43:32.626390] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:06:43.392 [2024-11-20 10:43:32.626512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60269 ] 00:06:43.650 [2024-11-20 10:43:32.811224] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.650 [2024-11-20 10:43:32.811279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.908 [2024-11-20 10:43:33.046482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.908 [2024-11-20 10:43:33.049718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.908 [2024-11-20 10:43:33.049772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.443 [2024-11-20 10:43:35.201809] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60251 has claimed it. 
00:06:46.443 request: 00:06:46.443 { 00:06:46.443 "method": "framework_enable_cpumask_locks", 00:06:46.443 "req_id": 1 00:06:46.443 } 00:06:46.443 Got JSON-RPC error response 00:06:46.443 response: 00:06:46.443 { 00:06:46.443 "code": -32603, 00:06:46.443 "message": "Failed to claim CPU core: 2" 00:06:46.443 } 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60251 /var/tmp/spdk.sock 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60251 ']' 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60269 /var/tmp/spdk2.sock 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60269 ']' 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
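The -32603 failure above is the point of this test: the first target was started on mask 0x7 (cores 0, 1, 2) and, once framework_enable_cpumask_locks ran against it, it held the lock files for those cores, so the second target on mask 0x1c (cores 2, 3, 4) cannot claim the shared core 2. A minimal reproduction from an SPDK build tree, using only the flags seen in this run (wait for each RPC socket to come up, e.g. with waitforlisten, before issuing the rpc.py calls):

  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  ./scripts/rpc.py framework_enable_cpumask_locks        # claims /var/tmp/spdk_cpu_lock_000..002
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected: JSON-RPC error -32603, "Failed to claim CPU core: 2" (core 2 is in both masks)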
00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.443 00:06:46.443 real 0m4.416s 00:06:46.443 user 0m1.249s 00:06:46.443 sys 0m0.226s 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.443 10:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.443 ************************************ 00:06:46.443 END TEST locking_overlapped_coremask_via_rpc 00:06:46.443 ************************************ 00:06:46.443 10:43:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:46.443 10:43:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60251 ]] 00:06:46.443 10:43:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60251 00:06:46.443 10:43:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60251 ']' 00:06:46.443 10:43:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60251 00:06:46.702 10:43:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:46.702 10:43:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.702 10:43:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60251 00:06:46.702 10:43:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.702 10:43:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.702 10:43:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60251' 00:06:46.702 killing process with pid 60251 00:06:46.702 10:43:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60251 00:06:46.702 10:43:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60251 00:06:49.260 10:43:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60269 ]] 00:06:49.260 10:43:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60269 00:06:49.260 10:43:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60269 ']' 00:06:49.260 10:43:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60269 00:06:49.260 10:43:38 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:49.260 10:43:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.260 
10:43:38 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60269 00:06:49.260 10:43:38 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:49.260 killing process with pid 60269 00:06:49.260 10:43:38 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:49.260 10:43:38 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60269' 00:06:49.260 10:43:38 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60269 00:06:49.260 10:43:38 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60269 00:06:51.794 10:43:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:51.794 10:43:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:51.794 10:43:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60251 ]] 00:06:51.794 10:43:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60251 00:06:51.794 10:43:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60251 ']' 00:06:51.794 10:43:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60251 00:06:51.794 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60251) - No such process 00:06:51.794 Process with pid 60251 is not found 00:06:51.794 10:43:40 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60251 is not found' 00:06:51.794 10:43:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60269 ]] 00:06:51.794 10:43:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60269 00:06:51.794 10:43:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60269 ']' 00:06:51.794 10:43:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60269 00:06:51.794 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60269) - No such process 00:06:51.794 Process with pid 60269 is not found 00:06:51.794 10:43:40 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60269 is not found' 00:06:51.794 10:43:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:51.794 00:06:51.794 real 0m50.716s 00:06:51.794 user 1m26.097s 00:06:51.794 sys 0m7.307s 00:06:51.794 10:43:40 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.794 10:43:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.794 ************************************ 00:06:51.794 END TEST cpu_locks 00:06:51.794 ************************************ 00:06:51.794 00:06:51.794 real 1m22.317s 00:06:51.794 user 2m28.744s 00:06:51.794 sys 0m11.577s 00:06:51.794 10:43:40 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.794 10:43:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.794 ************************************ 00:06:51.794 END TEST event 00:06:51.794 ************************************ 00:06:51.794 10:43:40 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:51.794 10:43:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.794 10:43:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.794 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:06:51.794 ************************************ 00:06:51.794 START TEST thread 00:06:51.794 ************************************ 00:06:51.794 10:43:40 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:51.794 * Looking for test storage... 
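cleanup at the end of cpu_locks.sh is deliberately idempotent: killprocess runs again for both targets after the tests have already torn them down, so kill reports "No such process", the helper just echoes that the pid is gone, and the remaining CPU lock files are removed with rm -f. The shape of that pattern, as a sketch rather than the helper's actual body:

  killprocess() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then
      # already gone: report and treat as success
      echo "Process with pid $pid is not found"
      return 0
    fi
    kill "$pid" && wait "$pid" 2>/dev/null
  }
  killprocess 60251   # pids from this run
  killprocess 60269
  rm -f /var/tmp/spdk_cpu_lock_*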
00:06:51.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:51.794 10:43:40 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:51.794 10:43:40 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:51.794 10:43:40 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:51.794 10:43:40 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:51.794 10:43:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.794 10:43:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.794 10:43:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.794 10:43:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.794 10:43:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.794 10:43:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.794 10:43:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.794 10:43:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.794 10:43:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.794 10:43:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.794 10:43:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.794 10:43:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:51.794 10:43:40 thread -- scripts/common.sh@345 -- # : 1 00:06:51.794 10:43:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.794 10:43:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.794 10:43:40 thread -- scripts/common.sh@365 -- # decimal 1 00:06:51.794 10:43:40 thread -- scripts/common.sh@353 -- # local d=1 00:06:51.794 10:43:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.794 10:43:40 thread -- scripts/common.sh@355 -- # echo 1 00:06:51.794 10:43:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.794 10:43:40 thread -- scripts/common.sh@366 -- # decimal 2 00:06:51.794 10:43:40 thread -- scripts/common.sh@353 -- # local d=2 00:06:51.794 10:43:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.794 10:43:40 thread -- scripts/common.sh@355 -- # echo 2 00:06:51.794 10:43:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.794 10:43:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.794 10:43:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.794 10:43:40 thread -- scripts/common.sh@368 -- # return 0 00:06:51.794 10:43:40 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.794 10:43:40 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:51.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.794 --rc genhtml_branch_coverage=1 00:06:51.794 --rc genhtml_function_coverage=1 00:06:51.794 --rc genhtml_legend=1 00:06:51.794 --rc geninfo_all_blocks=1 00:06:51.794 --rc geninfo_unexecuted_blocks=1 00:06:51.794 00:06:51.794 ' 00:06:51.794 10:43:40 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:51.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.794 --rc genhtml_branch_coverage=1 00:06:51.795 --rc genhtml_function_coverage=1 00:06:51.795 --rc genhtml_legend=1 00:06:51.795 --rc geninfo_all_blocks=1 00:06:51.795 --rc geninfo_unexecuted_blocks=1 00:06:51.795 00:06:51.795 ' 00:06:51.795 10:43:40 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:51.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:51.795 --rc genhtml_branch_coverage=1 00:06:51.795 --rc genhtml_function_coverage=1 00:06:51.795 --rc genhtml_legend=1 00:06:51.795 --rc geninfo_all_blocks=1 00:06:51.795 --rc geninfo_unexecuted_blocks=1 00:06:51.795 00:06:51.795 ' 00:06:51.795 10:43:40 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:51.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.795 --rc genhtml_branch_coverage=1 00:06:51.795 --rc genhtml_function_coverage=1 00:06:51.795 --rc genhtml_legend=1 00:06:51.795 --rc geninfo_all_blocks=1 00:06:51.795 --rc geninfo_unexecuted_blocks=1 00:06:51.795 00:06:51.795 ' 00:06:51.795 10:43:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.795 10:43:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:51.795 10:43:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.795 10:43:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.795 ************************************ 00:06:51.795 START TEST thread_poller_perf 00:06:51.795 ************************************ 00:06:51.795 10:43:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.795 [2024-11-20 10:43:41.041542] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:06:51.795 [2024-11-20 10:43:41.041671] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60464 ] 00:06:52.054 [2024-11-20 10:43:41.220528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.312 Running 1000 pollers for 1 seconds with 1 microseconds period. 
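poller_perf registers -b pollers, each firing on a -l microsecond period, and measures for -t seconds; the banner above confirms the mapping for this pass (1000 pollers, 1 µs period, 1 s), and the second pass further down repeats it with -l 0. The two invocations as driven by thread.sh:

  ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # timed pollers
  ./test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # busy (zero-period) pollers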
00:06:52.312 [2024-11-20 10:43:41.327156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.690 [2024-11-20T10:43:42.943Z] ====================================== 00:06:53.690 [2024-11-20T10:43:42.943Z] busy:2500426826 (cyc) 00:06:53.690 [2024-11-20T10:43:42.943Z] total_run_count: 408000 00:06:53.690 [2024-11-20T10:43:42.943Z] tsc_hz: 2490000000 (cyc) 00:06:53.690 [2024-11-20T10:43:42.943Z] ====================================== 00:06:53.690 [2024-11-20T10:43:42.943Z] poller_cost: 6128 (cyc), 2461 (nsec) 00:06:53.690 00:06:53.690 real 0m1.561s 00:06:53.690 user 0m1.348s 00:06:53.690 sys 0m0.105s 00:06:53.690 10:43:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.690 10:43:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.690 ************************************ 00:06:53.690 END TEST thread_poller_perf 00:06:53.690 ************************************ 00:06:53.690 10:43:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.690 10:43:42 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:53.690 10:43:42 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.690 10:43:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.690 ************************************ 00:06:53.690 START TEST thread_poller_perf 00:06:53.690 ************************************ 00:06:53.690 10:43:42 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.690 [2024-11-20 10:43:42.675023] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:06:53.690 [2024-11-20 10:43:42.675156] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60506 ] 00:06:53.690 [2024-11-20 10:43:42.851325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.949 [2024-11-20 10:43:42.956885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.949 Running 1000 pollers for 1 seconds with 0 microseconds period. 
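The first pass's summary above fully determines poller_cost: busy cycles divided by total_run_count, then converted to nanoseconds through tsc_hz. Shell integer arithmetic reproduces the reported 6128 cyc and 2461 nsec exactly:

  busy=2500426826 runs=408000 tsc_hz=2490000000
  echo "$((busy / runs)) cyc"                          # 6128
  echo "$((busy * 1000000000 / tsc_hz / runs)) nsec"   # 2461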
00:06:55.330 [2024-11-20T10:43:44.583Z] ====================================== 00:06:55.330 [2024-11-20T10:43:44.583Z] busy:2493647342 (cyc) 00:06:55.330 [2024-11-20T10:43:44.583Z] total_run_count: 5292000 00:06:55.330 [2024-11-20T10:43:44.583Z] tsc_hz: 2490000000 (cyc) 00:06:55.330 [2024-11-20T10:43:44.583Z] ====================================== 00:06:55.330 [2024-11-20T10:43:44.583Z] poller_cost: 471 (cyc), 189 (nsec) 00:06:55.330 00:06:55.330 real 0m1.552s 00:06:55.330 user 0m1.342s 00:06:55.330 sys 0m0.102s 00:06:55.330 10:43:44 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.330 ************************************ 00:06:55.330 END TEST thread_poller_perf 00:06:55.330 ************************************ 00:06:55.330 10:43:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.330 10:43:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:55.330 ************************************ 00:06:55.330 END TEST thread 00:06:55.330 ************************************ 00:06:55.330 00:06:55.330 real 0m3.478s 00:06:55.330 user 0m2.839s 00:06:55.330 sys 0m0.433s 00:06:55.330 10:43:44 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.330 10:43:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.330 10:43:44 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:55.330 10:43:44 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:55.330 10:43:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.330 10:43:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.330 10:43:44 -- common/autotest_common.sh@10 -- # set +x 00:06:55.330 ************************************ 00:06:55.330 START TEST app_cmdline 00:06:55.330 ************************************ 00:06:55.330 10:43:44 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:55.330 * Looking for test storage... 
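The zero-period pass gets through 5292000 iterations against the first pass's 408000, and the same arithmetic gives 2493647342 / 5292000 ≈ 471 cyc, or about 471 / 2.49 ≈ 189 nsec per call; the gap between 2461 nsec and 189 nsec suggests that timer bookkeeping, not the poller callback itself, dominates the cost of a 1 µs timed poller.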
00:06:55.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:55.330 10:43:44 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.330 10:43:44 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.330 10:43:44 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.330 10:43:44 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.330 10:43:44 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:55.330 10:43:44 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.330 10:43:44 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.330 --rc genhtml_branch_coverage=1 00:06:55.330 --rc genhtml_function_coverage=1 00:06:55.330 --rc genhtml_legend=1 00:06:55.330 --rc geninfo_all_blocks=1 00:06:55.330 --rc geninfo_unexecuted_blocks=1 00:06:55.330 00:06:55.330 ' 00:06:55.330 10:43:44 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.330 --rc genhtml_branch_coverage=1 00:06:55.330 --rc genhtml_function_coverage=1 00:06:55.330 --rc genhtml_legend=1 00:06:55.330 --rc geninfo_all_blocks=1 00:06:55.330 --rc geninfo_unexecuted_blocks=1 00:06:55.330 
00:06:55.330 ' 00:06:55.330 10:43:44 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.330 --rc genhtml_branch_coverage=1 00:06:55.330 --rc genhtml_function_coverage=1 00:06:55.330 --rc genhtml_legend=1 00:06:55.330 --rc geninfo_all_blocks=1 00:06:55.330 --rc geninfo_unexecuted_blocks=1 00:06:55.330 00:06:55.330 ' 00:06:55.330 10:43:44 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.330 --rc genhtml_branch_coverage=1 00:06:55.330 --rc genhtml_function_coverage=1 00:06:55.330 --rc genhtml_legend=1 00:06:55.330 --rc geninfo_all_blocks=1 00:06:55.330 --rc geninfo_unexecuted_blocks=1 00:06:55.330 00:06:55.330 ' 00:06:55.330 10:43:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:55.330 10:43:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60589 00:06:55.330 10:43:44 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:55.330 10:43:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60589 00:06:55.330 10:43:44 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60589 ']' 00:06:55.330 10:43:44 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.331 10:43:44 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.331 10:43:44 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.331 10:43:44 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.331 10:43:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.590 [2024-11-20 10:43:44.641225] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
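cmdline.sh starts this target with an RPC allowlist, so only the two methods named after --rpcs-allowed are reachable on the default socket /var/tmp/spdk.sock; rpc_get_methods therefore returns exactly those two names (the (( 2 == 2 )) check below), and any other method must come back as a JSON-RPC error, which the test exercises afterwards:

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
  # against the same socket, once it is listening:
  ./scripts/rpc.py rpc_get_methods           # allowed
  ./scripts/rpc.py spdk_get_version          # allowed
  ./scripts/rpc.py env_dpdk_get_mem_stats    # rejected: -32601 Method not found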
00:06:55.590 [2024-11-20 10:43:44.641505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60589 ] 00:06:55.590 [2024-11-20 10:43:44.824409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.849 [2024-11-20 10:43:44.941211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.787 10:43:45 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.787 10:43:45 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:56.787 10:43:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:56.787 { 00:06:56.787 "version": "SPDK v25.01-pre git sha1 a5dab6cf7", 00:06:56.787 "fields": { 00:06:56.787 "major": 25, 00:06:56.787 "minor": 1, 00:06:56.787 "patch": 0, 00:06:56.787 "suffix": "-pre", 00:06:56.787 "commit": "a5dab6cf7" 00:06:56.787 } 00:06:56.787 } 00:06:56.787 10:43:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:56.787 10:43:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:56.787 10:43:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:56.787 10:43:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:56.787 10:43:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:56.787 10:43:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:56.787 10:43:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:56.787 10:43:45 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.787 10:43:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.787 10:43:45 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.787 10:43:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:56.787 10:43:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:56.787 10:43:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.787 10:43:46 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:56.787 10:43:46 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:56.787 10:43:46 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.787 10:43:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.787 10:43:46 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:56.787 10:43:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.787 10:43:46 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.046 10:43:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.046 10:43:46 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.046 10:43:46 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:57.046 10:43:46 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.046 request: 00:06:57.046 { 00:06:57.046 "method": "env_dpdk_get_mem_stats", 00:06:57.046 "req_id": 1 00:06:57.046 } 00:06:57.046 Got JSON-RPC error response 00:06:57.046 response: 00:06:57.046 { 00:06:57.046 "code": -32601, 00:06:57.046 "message": "Method not found" 00:06:57.046 } 00:06:57.046 10:43:46 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:57.046 10:43:46 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:57.046 10:43:46 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:57.046 10:43:46 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:57.046 10:43:46 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60589 00:06:57.046 10:43:46 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60589 ']' 00:06:57.046 10:43:46 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60589 00:06:57.046 10:43:46 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:57.046 10:43:46 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.046 10:43:46 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60589 00:06:57.046 killing process with pid 60589 00:06:57.047 10:43:46 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.047 10:43:46 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.047 10:43:46 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60589' 00:06:57.047 10:43:46 app_cmdline -- common/autotest_common.sh@973 -- # kill 60589 00:06:57.047 10:43:46 app_cmdline -- common/autotest_common.sh@978 -- # wait 60589 00:06:59.582 ************************************ 00:06:59.582 END TEST app_cmdline 00:06:59.582 ************************************ 00:06:59.582 00:06:59.582 real 0m4.290s 00:06:59.582 user 0m4.461s 00:06:59.582 sys 0m0.631s 00:06:59.582 10:43:48 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.582 10:43:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.582 10:43:48 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:59.582 10:43:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.582 10:43:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.582 10:43:48 -- common/autotest_common.sh@10 -- # set +x 00:06:59.582 ************************************ 00:06:59.582 START TEST version 00:06:59.582 ************************************ 00:06:59.582 10:43:48 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:59.582 * Looking for test storage... 
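The rejected call is driven through the NOT helper: the command's exit status is captured in es, and the surrounding checks pass only when es is non-zero, so the test succeeds precisely because env_dpdk_get_mem_stats was refused. Reduced to its core (a sketch, not autotest_common.sh's actual definition):

  NOT() {
    # invert the status of the wrapped command: fail if it succeeds
    if "$@"; then return 1; else return 0; fi
  }
  NOT ./scripts/rpc.py env_dpdk_get_mem_stats && echo 'rejected, as required'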
00:06:59.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:59.582 10:43:48 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:59.582 10:43:48 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:59.582 10:43:48 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:59.842 10:43:48 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:59.842 10:43:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.842 10:43:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.842 10:43:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.842 10:43:48 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.842 10:43:48 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.842 10:43:48 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.842 10:43:48 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.842 10:43:48 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.842 10:43:48 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.842 10:43:48 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.842 10:43:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.842 10:43:48 version -- scripts/common.sh@344 -- # case "$op" in 00:06:59.842 10:43:48 version -- scripts/common.sh@345 -- # : 1 00:06:59.842 10:43:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.842 10:43:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.842 10:43:48 version -- scripts/common.sh@365 -- # decimal 1 00:06:59.842 10:43:48 version -- scripts/common.sh@353 -- # local d=1 00:06:59.842 10:43:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.842 10:43:48 version -- scripts/common.sh@355 -- # echo 1 00:06:59.842 10:43:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.842 10:43:48 version -- scripts/common.sh@366 -- # decimal 2 00:06:59.842 10:43:48 version -- scripts/common.sh@353 -- # local d=2 00:06:59.842 10:43:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.842 10:43:48 version -- scripts/common.sh@355 -- # echo 2 00:06:59.842 10:43:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.842 10:43:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.842 10:43:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.842 10:43:48 version -- scripts/common.sh@368 -- # return 0 00:06:59.842 10:43:48 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.842 10:43:48 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:59.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.842 --rc genhtml_branch_coverage=1 00:06:59.842 --rc genhtml_function_coverage=1 00:06:59.842 --rc genhtml_legend=1 00:06:59.842 --rc geninfo_all_blocks=1 00:06:59.842 --rc geninfo_unexecuted_blocks=1 00:06:59.842 00:06:59.842 ' 00:06:59.842 10:43:48 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:59.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.842 --rc genhtml_branch_coverage=1 00:06:59.842 --rc genhtml_function_coverage=1 00:06:59.842 --rc genhtml_legend=1 00:06:59.842 --rc geninfo_all_blocks=1 00:06:59.842 --rc geninfo_unexecuted_blocks=1 00:06:59.842 00:06:59.842 ' 00:06:59.842 10:43:48 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:59.842 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:59.842 --rc genhtml_branch_coverage=1 00:06:59.842 --rc genhtml_function_coverage=1 00:06:59.842 --rc genhtml_legend=1 00:06:59.842 --rc geninfo_all_blocks=1 00:06:59.842 --rc geninfo_unexecuted_blocks=1 00:06:59.842 00:06:59.842 ' 00:06:59.842 10:43:48 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:59.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.842 --rc genhtml_branch_coverage=1 00:06:59.842 --rc genhtml_function_coverage=1 00:06:59.842 --rc genhtml_legend=1 00:06:59.842 --rc geninfo_all_blocks=1 00:06:59.842 --rc geninfo_unexecuted_blocks=1 00:06:59.842 00:06:59.842 ' 00:06:59.842 10:43:48 version -- app/version.sh@17 -- # get_header_version major 00:06:59.842 10:43:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.842 10:43:48 version -- app/version.sh@14 -- # cut -f2 00:06:59.842 10:43:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.842 10:43:48 version -- app/version.sh@17 -- # major=25 00:06:59.842 10:43:48 version -- app/version.sh@18 -- # get_header_version minor 00:06:59.842 10:43:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.842 10:43:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.842 10:43:48 version -- app/version.sh@14 -- # cut -f2 00:06:59.842 10:43:48 version -- app/version.sh@18 -- # minor=1 00:06:59.842 10:43:48 version -- app/version.sh@19 -- # get_header_version patch 00:06:59.842 10:43:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.842 10:43:48 version -- app/version.sh@14 -- # cut -f2 00:06:59.842 10:43:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.842 10:43:48 version -- app/version.sh@19 -- # patch=0 00:06:59.842 10:43:48 version -- app/version.sh@20 -- # get_header_version suffix 00:06:59.842 10:43:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.842 10:43:48 version -- app/version.sh@14 -- # cut -f2 00:06:59.842 10:43:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.842 10:43:48 version -- app/version.sh@20 -- # suffix=-pre 00:06:59.842 10:43:48 version -- app/version.sh@22 -- # version=25.1 00:06:59.842 10:43:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:59.842 10:43:48 version -- app/version.sh@28 -- # version=25.1rc0 00:06:59.842 10:43:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:59.842 10:43:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:59.842 10:43:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:59.842 10:43:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:59.842 ************************************ 00:06:59.842 END TEST version 00:06:59.842 ************************************ 00:06:59.842 00:06:59.842 real 0m0.317s 00:06:59.842 user 0m0.191s 00:06:59.842 sys 0m0.182s 00:06:59.842 10:43:48 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.842 10:43:48 version -- common/autotest_common.sh@10 -- # set +x 00:06:59.842 10:43:49 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:59.842 10:43:49 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:59.842 10:43:49 -- spdk/autotest.sh@194 -- # uname -s 00:06:59.842 10:43:49 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:59.842 10:43:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:59.842 10:43:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:59.842 10:43:49 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:59.842 10:43:49 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:59.842 10:43:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.842 10:43:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.842 10:43:49 -- common/autotest_common.sh@10 -- # set +x 00:06:59.842 ************************************ 00:06:59.842 START TEST blockdev_nvme 00:06:59.842 ************************************ 00:06:59.842 10:43:49 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:00.106 * Looking for test storage... 00:07:00.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:00.107 10:43:49 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.107 10:43:49 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.107 10:43:49 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.107 10:43:49 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.107 10:43:49 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:00.107 10:43:49 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.107 10:43:49 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.107 --rc genhtml_branch_coverage=1 00:07:00.107 --rc genhtml_function_coverage=1 00:07:00.107 --rc genhtml_legend=1 00:07:00.107 --rc geninfo_all_blocks=1 00:07:00.107 --rc geninfo_unexecuted_blocks=1 00:07:00.107 00:07:00.107 ' 00:07:00.107 10:43:49 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.107 --rc genhtml_branch_coverage=1 00:07:00.107 --rc genhtml_function_coverage=1 00:07:00.107 --rc genhtml_legend=1 00:07:00.107 --rc geninfo_all_blocks=1 00:07:00.107 --rc geninfo_unexecuted_blocks=1 00:07:00.107 00:07:00.107 ' 00:07:00.107 10:43:49 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.107 --rc genhtml_branch_coverage=1 00:07:00.107 --rc genhtml_function_coverage=1 00:07:00.107 --rc genhtml_legend=1 00:07:00.107 --rc geninfo_all_blocks=1 00:07:00.107 --rc geninfo_unexecuted_blocks=1 00:07:00.107 00:07:00.107 ' 00:07:00.107 10:43:49 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.107 --rc genhtml_branch_coverage=1 00:07:00.107 --rc genhtml_function_coverage=1 00:07:00.107 --rc genhtml_legend=1 00:07:00.107 --rc geninfo_all_blocks=1 00:07:00.107 --rc geninfo_unexecuted_blocks=1 00:07:00.107 00:07:00.107 ' 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:00.107 10:43:49 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60778 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:00.107 10:43:49 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60778 00:07:00.107 10:43:49 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60778 ']' 00:07:00.107 10:43:49 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.107 10:43:49 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.107 10:43:49 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.107 10:43:49 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.107 10:43:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.366 [2024-11-20 10:43:49.410267] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
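blockdev.sh follows the same start-then-wait discipline as the earlier suites: start_spdk_tgt records the pid (60778 here) and waitforlisten blocks until the RPC socket answers or the process dies. The idea in miniature, assuming rpc.py's -t timeout flag rather than the helper's exact body:

  ./build/bin/spdk_tgt & pid=$!
  for _ in $(seq 1 100); do
    # break as soon as the target answers an RPC
    ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1 && break
    kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt exited before listening"; exit 1; }
    sleep 0.1
  done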
00:07:00.366 [2024-11-20 10:43:49.410583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60778 ] 00:07:00.366 [2024-11-20 10:43:49.591811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.625 [2024-11-20 10:43:49.696833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.562 10:43:50 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.562 10:43:50 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:07:01.562 10:43:50 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:01.562 10:43:50 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:01.562 10:43:50 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:01.562 10:43:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:01.562 10:43:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:01.562 10:43:50 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:01.562 10:43:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.562 10:43:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.821 10:43:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.821 10:43:50 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:01.821 10:43:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.821 10:43:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.821 10:43:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.821 10:43:50 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:01.821 10:43:50 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:01.821 10:43:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.821 10:43:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.821 10:43:50 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.821 10:43:50 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:01.821 10:43:50 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.821 10:43:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.821 10:43:51 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.821 10:43:51 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:01.821 10:43:51 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.821 10:43:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.821 10:43:51 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.822 10:43:51 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:01.822 10:43:51 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:01.822 10:43:51 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:01.822 10:43:51 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.822 10:43:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.081 10:43:51 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.081 10:43:51 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:02.081 10:43:51 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:02.082 10:43:51 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6da5f842-6fa5-4817-bb7b-727873adeef4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6da5f842-6fa5-4817-bb7b-727873adeef4",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "daef8bf4-fc3d-4e95-809b-193d49d3f696"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "daef8bf4-fc3d-4e95-809b-193d49d3f696",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "3a78b302-035c-477e-92cc-2dba081fbfbf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3a78b302-035c-477e-92cc-2dba081fbfbf",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "24110dfd-2ccb-435b-8842-1c11b53295f1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "24110dfd-2ccb-435b-8842-1c11b53295f1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "1d9a96c0-1941-4b96-b560-6f343739e5e7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "1d9a96c0-1941-4b96-b560-6f343739e5e7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "17661d9f-22c3-454e-8d89-fff07592a586"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "17661d9f-22c3-454e-8d89-fff07592a586",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:02.082 10:43:51 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:02.082 10:43:51 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:02.082 10:43:51 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:02.082 10:43:51 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60778 00:07:02.082 10:43:51 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60778 ']' 00:07:02.082 10:43:51 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60778 00:07:02.082 10:43:51 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:07:02.082 10:43:51 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.082 10:43:51 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60778 00:07:02.082 killing process with pid 60778 00:07:02.082 10:43:51 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.082 10:43:51 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.082 10:43:51 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60778' 00:07:02.082 10:43:51 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60778 00:07:02.082 10:43:51 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60778 00:07:04.619 10:43:53 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:04.619 10:43:53 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:04.619 10:43:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:04.619 10:43:53 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.619 10:43:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:04.619 ************************************ 00:07:04.619 START TEST bdev_hello_world 00:07:04.619 ************************************ 00:07:04.619 10:43:53 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:04.619 [2024-11-20 10:43:53.590840] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:07:04.619 [2024-11-20 10:43:53.590957] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60875 ] 00:07:04.619 [2024-11-20 10:43:53.767535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.878 [2024-11-20 10:43:53.873794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.446 [2024-11-20 10:43:54.513455] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:05.446 [2024-11-20 10:43:54.513508] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:05.446 [2024-11-20 10:43:54.513545] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:05.446 [2024-11-20 10:43:54.516505] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:05.446 [2024-11-20 10:43:54.517224] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:05.446 [2024-11-20 10:43:54.517252] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:05.446 [2024-11-20 10:43:54.517420] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
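For reference, the bdev_hello_world pass above boils down to a single example binary run against the JSON config written out earlier in this log. A minimal sketch of the same invocation, assuming the job's workspace layout and that test/bdev/bdev.json still holds the four attached PCIe controllers:

  # Sketch: re-running the hello_world step by hand (paths as used in this job).
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
  # Expected NOTICEs: open Nvme0n1, write "Hello World!", read it back, stop the app.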
00:07:05.446 00:07:05.446 [2024-11-20 10:43:54.517443] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:06.396 00:07:06.396 real 0m2.101s 00:07:06.396 user 0m1.744s 00:07:06.396 sys 0m0.251s 00:07:06.396 10:43:55 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.396 ************************************ 00:07:06.396 END TEST bdev_hello_world 00:07:06.396 ************************************ 00:07:06.396 10:43:55 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:06.655 10:43:55 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:06.655 10:43:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.655 10:43:55 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.655 10:43:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.655 ************************************ 00:07:06.655 START TEST bdev_bounds 00:07:06.655 ************************************ 00:07:06.655 10:43:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:06.655 Process bdevio pid: 60917 00:07:06.655 10:43:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60917 00:07:06.655 10:43:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:06.655 10:43:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:06.655 10:43:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60917' 00:07:06.655 10:43:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60917 00:07:06.655 10:43:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 60917 ']' 00:07:06.655 10:43:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.655 10:43:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.655 10:43:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.655 10:43:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.655 10:43:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:06.655 [2024-11-20 10:43:55.769467] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:07:06.655 [2024-11-20 10:43:55.769611] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60917 ] 00:07:06.915 [2024-11-20 10:43:55.952111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.915 [2024-11-20 10:43:56.067982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.915 [2024-11-20 10:43:56.068125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.915 [2024-11-20 10:43:56.068163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.482 10:43:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.482 10:43:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:07.483 10:43:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:07.742 I/O targets: 00:07:07.742 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:07.742 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:07.742 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:07.742 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:07.742 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:07.742 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:07.742 00:07:07.742 00:07:07.742 CUnit - A unit testing framework for C - Version 2.1-3 00:07:07.742 http://cunit.sourceforge.net/ 00:07:07.742 00:07:07.742 00:07:07.742 Suite: bdevio tests on: Nvme3n1 00:07:07.742 Test: blockdev write read block ...passed 00:07:07.742 Test: blockdev write zeroes read block ...passed 00:07:07.742 Test: blockdev write zeroes read no split ...passed 00:07:07.742 Test: blockdev write zeroes read split ...passed 00:07:07.742 Test: blockdev write zeroes read split partial ...passed 00:07:07.742 Test: blockdev reset ...[2024-11-20 10:43:56.885801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:07.743 [2024-11-20 10:43:56.889728] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller spassed 00:07:07.743 Test: blockdev write read 8 blocks ...uccessful. 
00:07:07.743 passed 00:07:07.743 Test: blockdev write read size > 128k ...passed 00:07:07.743 Test: blockdev write read invalid size ...passed 00:07:07.743 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.743 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.743 Test: blockdev write read max offset ...passed 00:07:07.743 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.743 Test: blockdev writev readv 8 blocks ...passed 00:07:07.743 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.743 Test: blockdev writev readv block ...passed 00:07:07.743 Test: blockdev writev readv size > 128k ...passed 00:07:07.743 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.743 Test: blockdev comparev and writev ...[2024-11-20 10:43:56.900285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c280a000 len:0x1000 00:07:07.743 [2024-11-20 10:43:56.900338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:07.743 passed 00:07:07.743 Test: blockdev nvme passthru rw ...passed 00:07:07.743 Test: blockdev nvme passthru vendor specific ...passed 00:07:07.743 Test: blockdev nvme admin passthru ...[2024-11-20 10:43:56.901327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:07.743 [2024-11-20 10:43:56.901361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:07.743 passed 00:07:07.743 Test: blockdev copy ...passed 00:07:07.743 Suite: bdevio tests on: Nvme2n3 00:07:07.743 Test: blockdev write read block ...passed 00:07:07.743 Test: blockdev write zeroes read block ...passed 00:07:07.743 Test: blockdev write zeroes read no split ...passed 00:07:07.743 Test: blockdev write zeroes read split ...passed 00:07:07.743 Test: blockdev write zeroes read split partial ...passed 00:07:07.743 Test: blockdev reset ...[2024-11-20 10:43:56.978698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:07.743 [2024-11-20 10:43:56.982721] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:07:07.743 Test: blockdev write read 8 blocks ...uccessful. 
00:07:07.743 passed 00:07:07.743 Test: blockdev write read size > 128k ...passed 00:07:07.743 Test: blockdev write read invalid size ...passed 00:07:07.743 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:07.743 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:07.743 Test: blockdev write read max offset ...passed 00:07:07.743 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:07.743 Test: blockdev writev readv 8 blocks ...passed 00:07:07.743 Test: blockdev writev readv 30 x 1block ...passed 00:07:07.743 Test: blockdev writev readv block ...passed 00:07:07.743 Test: blockdev writev readv size > 128k ...passed 00:07:07.743 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:07.743 Test: blockdev comparev and writev ...[2024-11-20 10:43:56.992965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:07:07.743 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2a6206000 len:0x1000 00:07:07.743 [2024-11-20 10:43:56.993129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:07.743 passed 00:07:07.743 Test: blockdev nvme passthru vendor specific ...passed 00:07:07.743 Test: blockdev nvme admin passthru ...[2024-11-20 10:43:56.994062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:07.743 [2024-11-20 10:43:56.994101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.002 passed 00:07:08.002 Test: blockdev copy ...passed 00:07:08.002 Suite: bdevio tests on: Nvme2n2 00:07:08.002 Test: blockdev write read block ...passed 00:07:08.002 Test: blockdev write zeroes read block ...passed 00:07:08.002 Test: blockdev write zeroes read no split ...passed 00:07:08.002 Test: blockdev write zeroes read split ...passed 00:07:08.002 Test: blockdev write zeroes read split partial ...passed 00:07:08.002 Test: blockdev reset ...[2024-11-20 10:43:57.072690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:08.002 passed 00:07:08.002 Test: blockdev write read 8 blocks ...[2024-11-20 10:43:57.076658] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:08.002 passed 00:07:08.002 Test: blockdev write read size > 128k ...passed 00:07:08.002 Test: blockdev write read invalid size ...passed 00:07:08.002 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.002 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.002 Test: blockdev write read max offset ...passed 00:07:08.002 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.002 Test: blockdev writev readv 8 blocks ...passed 00:07:08.002 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.002 Test: blockdev writev readv block ...passed 00:07:08.002 Test: blockdev writev readv size > 128k ...passed 00:07:08.002 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.002 Test: blockdev comparev and writev ...[2024-11-20 10:43:57.085838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2de03c000 len:0x1000 00:07:08.002 [2024-11-20 10:43:57.085883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.002 passed 00:07:08.002 Test: blockdev nvme passthru rw ...passed 00:07:08.002 Test: blockdev nvme passthru vendor specific ...passed 00:07:08.002 Test: blockdev nvme admin passthru ...[2024-11-20 10:43:57.086792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.002 [2024-11-20 10:43:57.086834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.002 passed 00:07:08.002 Test: blockdev copy ...passed 00:07:08.002 Suite: bdevio tests on: Nvme2n1 00:07:08.002 Test: blockdev write read block ...passed 00:07:08.002 Test: blockdev write zeroes read block ...passed 00:07:08.002 Test: blockdev write zeroes read no split ...passed 00:07:08.002 Test: blockdev write zeroes read split ...passed 00:07:08.002 Test: blockdev write zeroes read split partial ...passed 00:07:08.002 Test: blockdev reset ...[2024-11-20 10:43:57.166351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:08.002 [2024-11-20 10:43:57.170709] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:07:08.002 Test: blockdev write read 8 blocks ...uccessful. 
00:07:08.002 passed 00:07:08.002 Test: blockdev write read size > 128k ...passed 00:07:08.002 Test: blockdev write read invalid size ...passed 00:07:08.002 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.002 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.002 Test: blockdev write read max offset ...passed 00:07:08.002 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.002 Test: blockdev writev readv 8 blocks ...passed 00:07:08.002 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.002 Test: blockdev writev readv block ...passed 00:07:08.002 Test: blockdev writev readv size > 128k ...passed 00:07:08.002 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.002 Test: blockdev comparev and writev ...[2024-11-20 10:43:57.181215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2de038000 len:0x1000 00:07:08.002 [2024-11-20 10:43:57.181394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.002 passed 00:07:08.002 Test: blockdev nvme passthru rw ...passed 00:07:08.002 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:43:57.182704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.002 [2024-11-20 10:43:57.182868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.002 passed 00:07:08.002 Test: blockdev nvme admin passthru ...passed 00:07:08.002 Test: blockdev copy ...passed 00:07:08.002 Suite: bdevio tests on: Nvme1n1 00:07:08.002 Test: blockdev write read block ...passed 00:07:08.002 Test: blockdev write zeroes read block ...passed 00:07:08.002 Test: blockdev write zeroes read no split ...passed 00:07:08.002 Test: blockdev write zeroes read split ...passed 00:07:08.262 Test: blockdev write zeroes read split partial ...passed 00:07:08.262 Test: blockdev reset ...[2024-11-20 10:43:57.259664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:08.262 passed 00:07:08.262 Test: blockdev write read 8 blocks ...[2024-11-20 10:43:57.263356] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
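The per-suite reset step seen here disconnects the controller behind the bdev and brings it back before I/O resumes. The same operation can be driven by hand over the RPC socket; in the sketch below, bdev_nvme_reset_controller being available in this build is an assumption rather than something this log shows:

  # Sketch (assumed RPC: bdev_nvme_reset_controller; controller names match the attach config above).
  scripts/rpc.py bdev_nvme_reset_controller Nvme1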
00:07:08.262 passed 00:07:08.262 Test: blockdev write read size > 128k ...passed 00:07:08.262 Test: blockdev write read invalid size ...passed 00:07:08.262 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.262 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.262 Test: blockdev write read max offset ...passed 00:07:08.262 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.262 Test: blockdev writev readv 8 blocks ...passed 00:07:08.262 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.262 Test: blockdev writev readv block ...passed 00:07:08.262 Test: blockdev writev readv size > 128k ...passed 00:07:08.262 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.262 Test: blockdev comparev and writev ...[2024-11-20 10:43:57.272839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:07:08.262 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2de034000 len:0x1000 00:07:08.262 [2024-11-20 10:43:57.272986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.262 passed 00:07:08.262 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:43:57.273940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.262 [2024-11-20 10:43:57.273972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.262 passed 00:07:08.262 Test: blockdev nvme admin passthru ...passed 00:07:08.262 Test: blockdev copy ...passed 00:07:08.262 Suite: bdevio tests on: Nvme0n1 00:07:08.262 Test: blockdev write read block ...passed 00:07:08.262 Test: blockdev write zeroes read block ...passed 00:07:08.262 Test: blockdev write zeroes read no split ...passed 00:07:08.262 Test: blockdev write zeroes read split ...passed 00:07:08.262 Test: blockdev write zeroes read split partial ...passed 00:07:08.262 Test: blockdev reset ...[2024-11-20 10:43:57.356870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:08.262 [2024-11-20 10:43:57.360621] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller spassed 00:07:08.262 Test: blockdev write read 8 blocks ...passed 00:07:08.262 Test: blockdev write read size > 128k ...uccessful. 00:07:08.262 passed 00:07:08.262 Test: blockdev write read invalid size ...passed 00:07:08.262 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.262 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.262 Test: blockdev write read max offset ...passed 00:07:08.262 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.262 Test: blockdev writev readv 8 blocks ...passed 00:07:08.262 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.262 Test: blockdev writev readv block ...passed 00:07:08.262 Test: blockdev writev readv size > 128k ...passed 00:07:08.262 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.262 Test: blockdev comparev and writev ...passed 00:07:08.262 Test: blockdev nvme passthru rw ...[2024-11-20 10:43:57.368906] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:08.262 separate metadata which is not supported yet. 
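The skip notice just above is consistent with the bdev dump earlier in this log: Nvme0n1 is the only namespace reporting "md_size": 64 with "md_interleave": false, i.e. separate metadata, which bdevio's comparev_and_writev path does not support yet, so that one test is skipped rather than failed. Bdevs in that situation can be listed with a jq filter; the filter below is a sketch, not something the harness itself runs:

  # Sketch: list bdevs carrying separate (non-interleaved) metadata.
  # Assumes a running target on the default RPC socket, run from the SPDK repo root.
  scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.md_size > 0 and (.md_interleave | not)) | .name'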
00:07:08.262 passed 00:07:08.262 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:43:57.369545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:08.262 [2024-11-20 10:43:57.369612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:08.262 passed 00:07:08.262 Test: blockdev nvme admin passthru ...passed 00:07:08.262 Test: blockdev copy ...passed 00:07:08.262 00:07:08.262 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.262 suites 6 6 n/a 0 0 00:07:08.262 tests 138 138 138 0 0 00:07:08.262 asserts 893 893 893 0 n/a 00:07:08.262 00:07:08.262 Elapsed time = 1.514 seconds 00:07:08.262 0 00:07:08.262 10:43:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60917 00:07:08.262 10:43:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 60917 ']' 00:07:08.262 10:43:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 60917 00:07:08.262 10:43:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:08.262 10:43:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.262 10:43:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60917 00:07:08.262 10:43:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.262 10:43:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.262 10:43:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60917' 00:07:08.262 killing process with pid 60917 00:07:08.262 10:43:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 60917 00:07:08.262 10:43:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 60917 00:07:09.639 10:43:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:09.639 00:07:09.639 real 0m2.791s 00:07:09.639 user 0m7.077s 00:07:09.639 sys 0m0.421s 00:07:09.639 10:43:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.639 10:43:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:09.639 ************************************ 00:07:09.639 END TEST bdev_bounds 00:07:09.639 ************************************ 00:07:09.639 10:43:58 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:09.639 10:43:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:09.639 10:43:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.639 10:43:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:09.639 ************************************ 00:07:09.639 START TEST bdev_nbd 00:07:09.639 ************************************ 00:07:09.639 10:43:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:09.639 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:09.639 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:09.639 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.639 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:09.639 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:09.639 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:09.639 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:09.639 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:09.639 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:09.639 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:09.639 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:09.639 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:09.639 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:09.639 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:09.639 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:09.640 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60982 00:07:09.640 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:09.640 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:09.640 10:43:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60982 /var/tmp/spdk-nbd.sock 00:07:09.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:09.640 10:43:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60982 ']' 00:07:09.640 10:43:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:09.640 10:43:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.640 10:43:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:09.640 10:43:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.640 10:43:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:09.640 [2024-11-20 10:43:58.640829] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:07:09.640 [2024-11-20 10:43:58.640946] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.640 [2024-11-20 10:43:58.820231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.899 [2024-11-20 10:43:58.932347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.468 10:43:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.468 10:43:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:10.468 10:43:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:10.468 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.468 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:10.468 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:10.468 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:10.468 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.468 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:10.468 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:10.468 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:10.468 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:10.468 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:10.468 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:10.468 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.727 1+0 records in 
00:07:10.727 1+0 records out 00:07:10.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509076 s, 8.0 MB/s 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:10.727 10:43:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.986 1+0 records in 00:07:10.986 1+0 records out 00:07:10.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525151 s, 7.8 MB/s 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:10.986 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.245 1+0 records in 00:07:11.245 1+0 records out 00:07:11.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724024 s, 5.7 MB/s 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:11.245 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.504 1+0 records in 00:07:11.504 1+0 records out 00:07:11.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621425 s, 6.6 MB/s 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.504 10:44:00 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:11.504 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.763 1+0 records in 00:07:11.763 1+0 records out 00:07:11.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103565 s, 4.0 MB/s 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:11.763 10:44:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.023 1+0 records in 00:07:12.023 1+0 records out 00:07:12.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719811 s, 5.7 MB/s 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:12.023 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.282 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:12.282 { 00:07:12.282 "nbd_device": "/dev/nbd0", 00:07:12.282 "bdev_name": "Nvme0n1" 00:07:12.282 }, 00:07:12.282 { 00:07:12.282 "nbd_device": "/dev/nbd1", 00:07:12.282 "bdev_name": "Nvme1n1" 00:07:12.282 }, 00:07:12.282 { 00:07:12.282 "nbd_device": "/dev/nbd2", 00:07:12.282 "bdev_name": "Nvme2n1" 00:07:12.282 }, 00:07:12.282 { 00:07:12.282 "nbd_device": "/dev/nbd3", 00:07:12.282 "bdev_name": "Nvme2n2" 00:07:12.282 }, 00:07:12.282 { 00:07:12.282 "nbd_device": "/dev/nbd4", 00:07:12.282 "bdev_name": "Nvme2n3" 00:07:12.282 }, 00:07:12.282 { 00:07:12.282 "nbd_device": "/dev/nbd5", 00:07:12.282 "bdev_name": "Nvme3n1" 00:07:12.282 } 00:07:12.282 ]' 00:07:12.282 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:12.282 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:12.282 { 00:07:12.282 "nbd_device": "/dev/nbd0", 00:07:12.282 "bdev_name": "Nvme0n1" 00:07:12.282 }, 00:07:12.282 { 00:07:12.282 "nbd_device": "/dev/nbd1", 00:07:12.282 "bdev_name": "Nvme1n1" 00:07:12.282 }, 00:07:12.282 { 00:07:12.282 "nbd_device": "/dev/nbd2", 00:07:12.282 "bdev_name": "Nvme2n1" 00:07:12.282 }, 00:07:12.282 { 00:07:12.282 "nbd_device": "/dev/nbd3", 00:07:12.282 "bdev_name": "Nvme2n2" 00:07:12.282 }, 00:07:12.282 { 00:07:12.282 "nbd_device": "/dev/nbd4", 00:07:12.282 "bdev_name": "Nvme2n3" 00:07:12.282 }, 00:07:12.282 { 00:07:12.282 "nbd_device": "/dev/nbd5", 00:07:12.282 "bdev_name": "Nvme3n1" 00:07:12.282 } 00:07:12.282 ]' 00:07:12.282 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:12.282 10:44:01 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:12.282 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.282 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:12.282 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.282 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:12.282 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.282 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.540 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.540 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.540 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.540 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.540 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.540 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.540 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.540 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.540 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.540 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:12.798 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:12.798 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:12.798 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:12.798 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.798 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.798 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:12.798 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.798 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.798 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.798 10:44:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:12.798 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:12.798 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:12.798 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:12.798 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.798 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.798 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:12.798 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.798 10:44:02 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:12.799 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.799 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:13.057 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:13.057 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:13.057 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:13.057 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.057 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.057 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:13.057 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.057 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.057 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.057 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:13.317 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:13.317 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:13.317 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:13.317 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.317 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.317 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:13.317 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.317 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.317 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.317 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:13.575 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:13.575 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:13.575 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:13.575 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.575 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.575 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:13.575 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.575 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.575 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.575 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.575 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.835 10:44:02 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:13.835 10:44:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:14.094 /dev/nbd0 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.094 
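Between the stop pass and the next start pass, the harness confirms nothing is still exported: nbd_get_disks returns a JSON array, jq pulls out every .nbd_device, and grep -c counts how many look like /dev/nbd*. A sketch of that check (names mirror the trace; the '|| true' reflects the '-- # true' xtrace line, because grep -c exits non-zero when it counts zero matches):

    # Count NBD devices the SPDK app currently exports; prints 0 when idle.
    nbd_get_count() {
        local rpc_server=$1 disks_json names
        disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        echo "$names" | grep -c /dev/nbd || true
    }

    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
    [ "$count" -ne 0 ] && echo 'leaked nbd devices' >&2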
10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.094 1+0 records in 00:07:14.094 1+0 records out 00:07:14.094 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596973 s, 6.9 MB/s 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:14.094 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:14.399 /dev/nbd1 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.399 1+0 records in 00:07:14.399 1+0 records out 00:07:14.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000669274 s, 6.1 MB/s 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:14.399 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:14.666 /dev/nbd10 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.666 1+0 records in 00:07:14.666 1+0 records out 00:07:14.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598666 s, 6.8 MB/s 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:14.666 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:14.929 /dev/nbd11 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.929 10:44:03 
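Each start above is followed by the stricter waitfornbd helper from autotest_common.sh: wait for the name to appear in /proc/partitions, then read one 4 KiB block with O_DIRECT into a scratch file and confirm it is non-empty, proving the device actually serves I/O. A sketch reconstructed from the @875 through @893 lines in the trace (the retry delays are assumptions; the paths are the trace's own):

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest

        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed
        done

        for (( i = 1; i <= 20; i++ )); do
            # One direct-I/O read straight off the device.
            if dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                (( size != 0 )) && return 0
            fi
            sleep 0.1  # assumed
        done
        return 1
    }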
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.929 1+0 records in 00:07:14.929 1+0 records out 00:07:14.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789124 s, 5.2 MB/s 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:14.929 10:44:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:14.929 /dev/nbd12 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.189 1+0 records in 00:07:15.189 1+0 records out 00:07:15.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000807536 s, 5.1 MB/s 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:15.189 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:15.189 /dev/nbd13 
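The six starts above are one loop pairing a bdev list with an NBD device list by index; each iteration issues nbd_start_disk over RPC and then waits for the node with waitfornbd. A simplified sketch of that loop, with the exact lists from the trace:

    # Export each named bdev on the matching /dev/nbdX node.
    nbd_start_disks() {
        local rpc_server=$1 i
        local bdev_list=($2) nbd_list=($3)
        for (( i = 0; i < ${#nbd_list[@]}; i++ )); do
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" \
                nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
            waitfornbd "$(basename "${nbd_list[i]}")"
        done
    }

    nbd_start_disks /var/tmp/spdk-nbd.sock \
        'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' \
        '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'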
00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.448 1+0 records in 00:07:15.448 1+0 records out 00:07:15.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697942 s, 5.9 MB/s 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:15.448 { 00:07:15.448 "nbd_device": "/dev/nbd0", 00:07:15.448 "bdev_name": "Nvme0n1" 00:07:15.448 }, 00:07:15.448 { 00:07:15.448 "nbd_device": "/dev/nbd1", 00:07:15.448 "bdev_name": "Nvme1n1" 00:07:15.448 }, 00:07:15.448 { 00:07:15.448 "nbd_device": "/dev/nbd10", 00:07:15.448 "bdev_name": "Nvme2n1" 00:07:15.448 }, 00:07:15.448 { 00:07:15.448 "nbd_device": "/dev/nbd11", 00:07:15.448 "bdev_name": "Nvme2n2" 00:07:15.448 }, 00:07:15.448 { 00:07:15.448 "nbd_device": "/dev/nbd12", 00:07:15.448 "bdev_name": "Nvme2n3" 00:07:15.448 }, 00:07:15.448 { 00:07:15.448 "nbd_device": "/dev/nbd13", 00:07:15.448 "bdev_name": "Nvme3n1" 00:07:15.448 } 00:07:15.448 ]' 00:07:15.448 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:15.448 { 00:07:15.448 "nbd_device": "/dev/nbd0", 00:07:15.449 "bdev_name": "Nvme0n1" 00:07:15.449 }, 00:07:15.449 { 00:07:15.449 "nbd_device": "/dev/nbd1", 00:07:15.449 "bdev_name": "Nvme1n1" 00:07:15.449 }, 00:07:15.449 { 00:07:15.449 "nbd_device": "/dev/nbd10", 00:07:15.449 "bdev_name": "Nvme2n1" 
00:07:15.449 }, 00:07:15.449 { 00:07:15.449 "nbd_device": "/dev/nbd11", 00:07:15.449 "bdev_name": "Nvme2n2" 00:07:15.449 }, 00:07:15.449 { 00:07:15.449 "nbd_device": "/dev/nbd12", 00:07:15.449 "bdev_name": "Nvme2n3" 00:07:15.449 }, 00:07:15.449 { 00:07:15.449 "nbd_device": "/dev/nbd13", 00:07:15.449 "bdev_name": "Nvme3n1" 00:07:15.449 } 00:07:15.449 ]' 00:07:15.449 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:15.709 /dev/nbd1 00:07:15.709 /dev/nbd10 00:07:15.709 /dev/nbd11 00:07:15.709 /dev/nbd12 00:07:15.709 /dev/nbd13' 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:15.709 /dev/nbd1 00:07:15.709 /dev/nbd10 00:07:15.709 /dev/nbd11 00:07:15.709 /dev/nbd12 00:07:15.709 /dev/nbd13' 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:15.709 256+0 records in 00:07:15.709 256+0 records out 00:07:15.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120083 s, 87.3 MB/s 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:15.709 256+0 records in 00:07:15.709 256+0 records out 00:07:15.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12594 s, 8.3 MB/s 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.709 10:44:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:15.968 256+0 records in 00:07:15.968 256+0 records out 00:07:15.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127059 s, 8.3 MB/s 00:07:15.968 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.968 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:15.968 256+0 records in 00:07:15.968 256+0 records out 00:07:15.968 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125025 s, 8.4 MB/s 00:07:15.968 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.968 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:16.228 256+0 records in 00:07:16.228 256+0 records out 00:07:16.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12757 s, 8.2 MB/s 00:07:16.228 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.228 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:16.228 256+0 records in 00:07:16.228 256+0 records out 00:07:16.228 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127831 s, 8.2 MB/s 00:07:16.228 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.228 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:16.487 256+0 records in 00:07:16.487 256+0 records out 00:07:16.487 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125061 s, 8.4 MB/s 00:07:16.487 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:16.487 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:16.487 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.487 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:16.487 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:16.487 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:16.487 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:16.487 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.487 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:16.487 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.487 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:16.488 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.488 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:16.488 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.488 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:16.488 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.488 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:16.488 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.488 10:44:05 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:16.488 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:16.488 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:16.488 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.488 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:16.488 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.488 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:16.488 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.488 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.747 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.747 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.747 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.747 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.747 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.747 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.747 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.747 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.747 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.747 10:44:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:17.005 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:17.005 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:17.005 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:17.005 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.005 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.005 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:17.005 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.005 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.005 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.005 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.265 10:44:06 
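The read-back pass above is nbd_dd_data_verify: fill a scratch file with 256 random 4 KiB blocks, write it to every device with O_DIRECT, then cmp the first 1 MiB of each device byte-for-byte against the same file before deleting it. Condensed into a sketch (paths, counts, and the cmp flags are the trace's own):

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    # Write phase: the same 1 MiB of random data onto every exported device.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for i in "${nbd_list[@]}"; do
        dd if="$tmp" of="$i" bs=4096 count=256 oflag=direct
    done

    # Verify phase: -b reports differing bytes, -n 1M bounds the comparison.
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$i"
    done
    rm "$tmp"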
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.265 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:17.525 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:17.525 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:17.525 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:17.525 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.525 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.525 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:17.525 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.525 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.525 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.525 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:17.784 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:17.784 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:17.784 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:17.784 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.784 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.784 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:17.784 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.784 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.784 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.784 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock
00:07:17.784 10:44:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:18.043 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:18.043 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:18.043 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:18.043 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:18.043 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:07:18.043 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:18.043 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:07:18.043 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:07:18.043 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:07:18.043 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:07:18.043 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:18.043 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:07:18.043 10:44:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:07:18.043 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:18.043 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:07:18.043 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:07:18.302 malloc_lvol_verify
00:07:18.302 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:07:18.561 23a1b79c-c819-45af-be16-f00b35a4fc25
00:07:18.561 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:07:18.561 cda6cdd9-7804-4f58-a57f-918746f7e317
00:07:18.561 10:44:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:07:18.819 /dev/nbd0
00:07:18.819 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:07:18.819 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:07:18.819 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:07:18.819 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:07:18.819 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:07:18.819 mke2fs 1.47.0 (5-Feb-2023)
00:07:18.819 Discarding device blocks: 0/4096 done
00:07:18.819 Creating filesystem with 4096 1k blocks and 1024 inodes
00:07:18.819
00:07:18.819 Allocating group tables: 0/1 done
00:07:18.819 Writing inode tables: 0/1 done
00:07:18.819 Creating journal (1024 blocks): done
00:07:18.819 Writing superblocks and filesystem accounting information: 0/1 done
00:07:18.819
00:07:18.819 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:07:18.819 10:44:08
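nbd_with_lvol_verify above stacks a logical volume on a malloc bdev, exports it over NBD, waits for the kernel to report a non-zero capacity in /sys/block/nbd0/size, and then formats it for real. The RPC sequence, condensed from the trace (bdev_malloc_create takes size in MiB and block size in bytes, so this is a 16 MiB backing bdev and a 4 MiB lvol, which matches the 4096 1k-block filesystem mke2fs reports; the polling loop is an assumption, since the trace only shows the already-ready case with 8192 sectors):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

    rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
    rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the lvstore UUID
    rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol, prints its UUID

    rpc nbd_start_disk lvs/lvol /dev/nbd0
    # Wait until the kernel has picked up the capacity (8192 512 B sectors here).
    while [[ ! -e /sys/block/nbd0/size || $(cat /sys/block/nbd0/size) -eq 0 ]]; do
        sleep 0.1  # assumed
    done

    mkfs.ext4 /dev/nbd0
    rpc nbd_stop_disk /dev/nbd0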
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.819 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:18.819 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.819 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:18.819 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.819 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60982 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60982 ']' 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60982 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60982 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.078 killing process with pid 60982 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60982' 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60982 00:07:19.078 10:44:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60982 00:07:20.456 10:44:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:20.456 00:07:20.456 real 0m10.900s 00:07:20.456 user 0m14.075s 00:07:20.456 sys 0m4.522s 00:07:20.456 10:44:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.456 10:44:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:20.456 ************************************ 00:07:20.456 END TEST bdev_nbd 00:07:20.456 ************************************ 00:07:20.456 10:44:09 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:20.456 10:44:09 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:20.456 skipping fio tests on NVMe due to multi-ns failures. 00:07:20.456 10:44:09 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
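The suite then tears down the SPDK app with killprocess (pid 60982 here): refuse an empty pid, check the process is alive with kill -0, confirm via ps that it is the reactor process and not a sudo wrapper, then kill it and wait for it to exit. A sketch of the happy path seen in the trace (the sudo branch is only stubbed here; the real helper handles it differently):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1      # no pid given
        kill -0 "$pid" || return 0     # already gone

        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            return 1                   # stub: never kill a sudo wrapper by mistake
        fi

        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }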
00:07:20.456 10:44:09 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:20.456 10:44:09 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:20.456 10:44:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:20.456 10:44:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.456 10:44:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:20.456 ************************************ 00:07:20.456 START TEST bdev_verify 00:07:20.456 ************************************ 00:07:20.456 10:44:09 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:20.456 [2024-11-20 10:44:09.602198] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:07:20.456 [2024-11-20 10:44:09.602326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61368 ] 00:07:20.715 [2024-11-20 10:44:09.781197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:20.715 [2024-11-20 10:44:09.888492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.715 [2024-11-20 10:44:09.888523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.650 Running I/O for 5 seconds... 00:07:23.527 22784.00 IOPS, 89.00 MiB/s [2024-11-20T10:44:13.717Z] 21600.00 IOPS, 84.38 MiB/s [2024-11-20T10:44:15.114Z] 21504.00 IOPS, 84.00 MiB/s [2024-11-20T10:44:16.052Z] 22016.00 IOPS, 86.00 MiB/s [2024-11-20T10:44:16.052Z] 22425.60 IOPS, 87.60 MiB/s 00:07:26.799 Latency(us) 00:07:26.799 [2024-11-20T10:44:16.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.799 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.799 Verification LBA range: start 0x0 length 0xbd0bd 00:07:26.799 Nvme0n1 : 5.07 1841.19 7.19 0.00 0.00 69390.29 15054.86 70326.18 00:07:26.799 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.799 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:26.799 Nvme0n1 : 5.07 1867.10 7.29 0.00 0.00 68420.81 15581.25 75379.56 00:07:26.799 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.799 Verification LBA range: start 0x0 length 0xa0000 00:07:26.799 Nvme1n1 : 5.08 1840.57 7.19 0.00 0.00 69342.76 13159.84 65693.92 00:07:26.799 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.799 Verification LBA range: start 0xa0000 length 0xa0000 00:07:26.799 Nvme1n1 : 5.07 1866.53 7.29 0.00 0.00 68237.77 15897.09 60640.54 00:07:26.799 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.799 Verification LBA range: start 0x0 length 0x80000 00:07:26.799 Nvme2n1 : 5.08 1840.05 7.19 0.00 0.00 69260.96 12212.33 61061.65 00:07:26.799 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.799 Verification LBA range: start 0x80000 length 0x80000 00:07:26.799 Nvme2n1 : 5.08 1865.23 7.29 0.00 0.00 68153.47 17160.43 58956.08 00:07:26.799 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.799 Verification LBA range: start 0x0 length 0x80000 00:07:26.799 Nvme2n2 : 5.08 1838.66 7.18 0.00 0.00 69167.54 14317.91 62746.11 00:07:26.799 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.799 Verification LBA range: start 0x80000 length 0x80000 00:07:26.799 Nvme2n2 : 5.08 1864.39 7.28 0.00 0.00 68055.07 16949.87 60640.54 00:07:26.799 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.799 Verification LBA range: start 0x0 length 0x80000 00:07:26.799 Nvme2n3 : 5.08 1837.77 7.18 0.00 0.00 69073.50 15791.81 64851.69 00:07:26.799 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.799 Verification LBA range: start 0x80000 length 0x80000 00:07:26.799 Nvme2n3 : 5.08 1863.87 7.28 0.00 0.00 67923.47 16949.87 62746.11 00:07:26.799 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:26.799 Verification LBA range: start 0x0 length 0x20000 00:07:26.799 Nvme3n1 : 5.09 1836.94 7.18 0.00 0.00 68994.44 16318.20 65693.92 00:07:26.800 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:26.800 Verification LBA range: start 0x20000 length 0x20000 00:07:26.800 Nvme3n1 : 5.08 1862.97 7.28 0.00 0.00 67841.16 15370.69 64851.69 00:07:26.800 [2024-11-20T10:44:16.053Z] =================================================================================================================== 00:07:26.800 [2024-11-20T10:44:16.053Z] Total : 22225.27 86.82 0.00 0.00 68651.36 12212.33 75379.56 00:07:28.176 00:07:28.176 real 0m7.609s 00:07:28.176 user 0m14.083s 00:07:28.176 sys 0m0.309s 00:07:28.176 10:44:17 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.176 10:44:17 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:28.176 ************************************ 00:07:28.176 END TEST bdev_verify 00:07:28.176 ************************************ 00:07:28.176 10:44:17 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:28.176 10:44:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:28.176 10:44:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.176 10:44:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:28.176 ************************************ 00:07:28.176 START TEST bdev_verify_big_io 00:07:28.176 ************************************ 00:07:28.176 10:44:17 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:28.176 [2024-11-20 10:44:17.284312] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
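A quick sanity check on the bdev_verify summary above: with 4 KiB I/O, the MiB/s column should equal IOPS * 4096 / 2^20, and it does:

    # Total line of the bdev_verify table: 22225.27 IOPS at 4 KiB per I/O.
    echo '22225.27 * 4096 / 1048576' | bc -l   # 86.81..., matching 86.82 MiB/s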
00:07:28.177 [2024-11-20 10:44:17.284426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61466 ] 00:07:28.436 [2024-11-20 10:44:17.465277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:28.436 [2024-11-20 10:44:17.578895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.436 [2024-11-20 10:44:17.578923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.374 Running I/O for 5 seconds... 00:07:32.846 2105.00 IOPS, 131.56 MiB/s [2024-11-20T10:44:23.475Z] 1984.50 IOPS, 124.03 MiB/s [2024-11-20T10:44:24.412Z] 2136.33 IOPS, 133.52 MiB/s [2024-11-20T10:44:24.412Z] 2720.75 IOPS, 170.05 MiB/s 00:07:35.159 Latency(us) 00:07:35.159 [2024-11-20T10:44:24.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.159 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:35.159 Verification LBA range: start 0x0 length 0xbd0b 00:07:35.159 Nvme0n1 : 5.61 137.00 8.56 0.00 0.00 916036.95 28214.70 815278.37 00:07:35.159 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:35.159 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:35.159 Nvme0n1 : 5.57 172.37 10.77 0.00 0.00 724322.27 33478.63 741162.15 00:07:35.159 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:35.159 Verification LBA range: start 0x0 length 0xa000 00:07:35.159 Nvme1n1 : 5.61 136.94 8.56 0.00 0.00 893925.13 81275.17 929821.61 00:07:35.159 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:35.159 Verification LBA range: start 0xa000 length 0xa000 00:07:35.159 Nvme1n1 : 5.57 172.30 10.77 0.00 0.00 708266.28 48849.32 650201.34 00:07:35.159 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:35.159 Verification LBA range: start 0x0 length 0x8000 00:07:35.159 Nvme2n1 : 5.61 136.89 8.56 0.00 0.00 873733.55 81696.28 859074.31 00:07:35.159 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:35.159 Verification LBA range: start 0x8000 length 0x8000 00:07:35.159 Nvme2n1 : 5.62 178.60 11.16 0.00 0.00 672854.20 25477.45 660308.10 00:07:35.159 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:35.159 Verification LBA range: start 0x0 length 0x8000 00:07:35.159 Nvme2n2 : 5.61 136.84 8.55 0.00 0.00 853607.22 81696.28 875918.91 00:07:35.159 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:35.159 Verification LBA range: start 0x8000 length 0x8000 00:07:35.159 Nvme2n2 : 5.62 178.38 11.15 0.00 0.00 657836.36 25056.33 677152.69 00:07:35.159 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:35.159 Verification LBA range: start 0x0 length 0x8000 00:07:35.159 Nvme2n3 : 5.68 146.54 9.16 0.00 0.00 783273.95 20002.96 896132.42 00:07:35.159 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:35.159 Verification LBA range: start 0x8000 length 0x8000 00:07:35.159 Nvme2n3 : 5.62 182.05 11.38 0.00 0.00 631232.15 21792.69 704104.04 00:07:35.159 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:35.159 Verification LBA range: start 0x0 length 0x2000 00:07:35.159 Nvme3n1 : 5.70 157.30 9.83 0.00 0.00 715334.68 2737.25 916345.93 00:07:35.159 Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:35.159 Verification LBA range: start 0x2000 length 0x2000 00:07:35.159 Nvme3n1 : 5.68 199.35 12.46 0.00 0.00 564435.32 6948.40 717579.72 00:07:35.159 [2024-11-20T10:44:24.412Z] =================================================================================================================== 00:07:35.159 [2024-11-20T10:44:24.412Z] Total : 1934.55 120.91 0.00 0.00 735572.49 2737.25 929821.61 00:07:37.087 00:07:37.087 real 0m8.742s 00:07:37.087 user 0m16.323s 00:07:37.087 sys 0m0.326s 00:07:37.087 10:44:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.087 10:44:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:37.087 ************************************ 00:07:37.087 END TEST bdev_verify_big_io 00:07:37.087 ************************************ 00:07:37.087 10:44:25 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.087 10:44:25 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:37.087 10:44:25 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.087 10:44:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.087 ************************************ 00:07:37.087 START TEST bdev_write_zeroes 00:07:37.087 ************************************ 00:07:37.087 10:44:26 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.087 [2024-11-20 10:44:26.107851] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:07:37.087 [2024-11-20 10:44:26.107972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61581 ] 00:07:37.087 [2024-11-20 10:44:26.277937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.346 [2024-11-20 10:44:26.393116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.913 Running I/O for 1 seconds... 
00:07:39.105 79104.00 IOPS, 309.00 MiB/s
00:07:39.105 Latency(us)
00:07:39.105 [2024-11-20T10:44:28.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:39.105 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:39.105 Nvme0n1 : 1.02 13150.20 51.37 0.00 0.00 9711.47 8369.66 24635.22
00:07:39.105 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:39.105 Nvme1n1 : 1.02 13136.52 51.31 0.00 0.00 9709.79 8527.58 25056.33
00:07:39.105 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:39.105 Nvme2n1 : 1.02 13123.07 51.26 0.00 0.00 9679.62 8317.02 21897.97
00:07:39.105 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:39.105 Nvme2n2 : 1.02 13109.95 51.21 0.00 0.00 9658.34 8264.38 20634.63
00:07:39.105 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:39.105 Nvme2n3 : 1.02 13152.87 51.38 0.00 0.00 9606.64 5658.73 16844.59
00:07:39.105 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:39.105 Nvme3n1 : 1.02 13140.98 51.33 0.00 0.00 9593.88 5921.93 17897.38
00:07:39.105 [2024-11-20T10:44:28.358Z] ===================================================================================================================
00:07:39.105 [2024-11-20T10:44:28.358Z] Total : 78813.58 307.87 0.00 0.00 9659.86 5658.73 25056.33
00:07:40.040
00:07:40.040 real 0m3.232s
00:07:40.040 user 0m2.853s
00:07:40.040 sys 0m0.267s
00:07:40.040 10:44:29 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:40.040 ************************************
00:07:40.040 END TEST bdev_write_zeroes
00:07:40.040 ************************************
00:07:40.040 10:44:29 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:40.040 10:44:29 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:40.040 10:44:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:40.040 10:44:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:40.040 10:44:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:40.298 ************************************
00:07:40.298 START TEST bdev_json_nonenclosed
00:07:40.298 ************************************
00:07:40.299 10:44:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:40.299 [2024-11-20 10:44:29.388262] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
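Each of the passing suites above (bdev_verify, bdev_verify_big_io, bdev_write_zeroes) is the same bdevperf example binary driven against the same bdev.json config; only the workload flag, I/O size, duration, and core mask change. Illustrative invocations mirroring the run_test lines in the trace:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # verify: 128 outstanding 4 KiB I/Os for 5 s on two cores (-m 0x3).
    "$bdevperf" --json "$conf" -q 128 -o 4096 -w verify -t 5 -C -m 0x3

    # verify_big_io: same shape with 64 KiB I/Os.
    "$bdevperf" --json "$conf" -q 128 -o 65536 -w verify -t 5 -C -m 0x3

    # write_zeroes: single core, 1 s.
    "$bdevperf" --json "$conf" -q 128 -o 4096 -w write_zeroes -t 1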
00:07:40.299 [2024-11-20 10:44:29.388396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61639 ] 00:07:40.557 [2024-11-20 10:44:29.569208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.557 [2024-11-20 10:44:29.680557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.557 [2024-11-20 10:44:29.680654] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:40.557 [2024-11-20 10:44:29.680676] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:40.557 [2024-11-20 10:44:29.680701] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.815 00:07:40.815 real 0m0.637s 00:07:40.815 user 0m0.387s 00:07:40.815 sys 0m0.146s 00:07:40.815 10:44:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.815 10:44:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:40.815 ************************************ 00:07:40.815 END TEST bdev_json_nonenclosed 00:07:40.815 ************************************ 00:07:40.815 10:44:29 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:40.815 10:44:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:40.815 10:44:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.815 10:44:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.815 ************************************ 00:07:40.815 START TEST bdev_json_nonarray 00:07:40.815 ************************************ 00:07:40.815 10:44:29 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.074 [2024-11-20 10:44:30.091407] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:07:41.074 [2024-11-20 10:44:30.091529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61665 ] 00:07:41.074 [2024-11-20 10:44:30.270362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.332 [2024-11-20 10:44:30.388176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.332 [2024-11-20 10:44:30.388269] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:41.332 [2024-11-20 10:44:30.388291] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:41.332 [2024-11-20 10:44:30.388303] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.592 00:07:41.592 real 0m0.642s 00:07:41.592 user 0m0.393s 00:07:41.592 sys 0m0.145s 00:07:41.592 10:44:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.592 10:44:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:41.592 ************************************ 00:07:41.592 END TEST bdev_json_nonarray 00:07:41.592 ************************************ 00:07:41.592 10:44:30 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:41.592 10:44:30 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:41.592 10:44:30 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:41.592 10:44:30 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:41.592 10:44:30 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:41.592 10:44:30 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:41.592 10:44:30 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:41.592 10:44:30 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:41.592 10:44:30 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:41.592 10:44:30 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:41.592 10:44:30 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:41.592 00:07:41.592 real 0m41.655s 00:07:41.592 user 1m1.465s 00:07:41.592 sys 0m7.592s 00:07:41.592 10:44:30 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.592 10:44:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.592 ************************************ 00:07:41.592 END TEST blockdev_nvme 00:07:41.592 ************************************ 00:07:41.592 10:44:30 -- spdk/autotest.sh@209 -- # uname -s 00:07:41.592 10:44:30 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:41.592 10:44:30 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:41.592 10:44:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:41.592 10:44:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.592 10:44:30 -- common/autotest_common.sh@10 -- # set +x 00:07:41.592 ************************************ 00:07:41.592 START TEST blockdev_nvme_gpt 00:07:41.592 ************************************ 00:07:41.592 10:44:30 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:41.852 * Looking for test storage... 
00:07:41.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:41.852 10:44:30 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:41.852 10:44:30 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:41.852 10:44:30 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:41.852 10:44:30 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:41.852 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.852 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.853 10:44:30 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:41.853 10:44:31 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.853 10:44:31 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:41.853 10:44:31 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:41.853 10:44:31 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.853 10:44:31 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:41.853 10:44:31 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.853 10:44:31 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.853 10:44:31 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.853 10:44:31 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:41.853 10:44:31 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.853 10:44:31 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:41.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.853 --rc genhtml_branch_coverage=1 00:07:41.853 --rc genhtml_function_coverage=1 00:07:41.853 --rc genhtml_legend=1 00:07:41.853 --rc geninfo_all_blocks=1 00:07:41.853 --rc geninfo_unexecuted_blocks=1 00:07:41.853 00:07:41.853 ' 00:07:41.853 10:44:31 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:41.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.853 --rc 
genhtml_branch_coverage=1 00:07:41.853 --rc genhtml_function_coverage=1 00:07:41.853 --rc genhtml_legend=1 00:07:41.853 --rc geninfo_all_blocks=1 00:07:41.853 --rc geninfo_unexecuted_blocks=1 00:07:41.853 00:07:41.853 ' 00:07:41.853 10:44:31 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:41.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.853 --rc genhtml_branch_coverage=1 00:07:41.853 --rc genhtml_function_coverage=1 00:07:41.853 --rc genhtml_legend=1 00:07:41.853 --rc geninfo_all_blocks=1 00:07:41.853 --rc geninfo_unexecuted_blocks=1 00:07:41.853 00:07:41.853 ' 00:07:41.853 10:44:31 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:41.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.853 --rc genhtml_branch_coverage=1 00:07:41.853 --rc genhtml_function_coverage=1 00:07:41.853 --rc genhtml_legend=1 00:07:41.853 --rc geninfo_all_blocks=1 00:07:41.853 --rc geninfo_unexecuted_blocks=1 00:07:41.853 00:07:41.853 ' 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61748 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:07:41.853 10:44:31 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61748 00:07:41.853 10:44:31 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61748 ']' 00:07:41.853 10:44:31 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.853 10:44:31 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.853 10:44:31 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.853 10:44:31 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.853 10:44:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:42.114 [2024-11-20 10:44:31.140795] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:07:42.114 [2024-11-20 10:44:31.140923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61748 ] 00:07:42.114 [2024-11-20 10:44:31.322748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.373 [2024-11-20 10:44:31.437059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.311 10:44:32 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.311 10:44:32 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:43.311 10:44:32 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:43.311 10:44:32 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:43.311 10:44:32 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:43.570 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:43.830 Waiting for block devices as requested 00:07:43.830 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:44.090 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:44.090 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:44.349 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:49.650 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:49.650 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:49.650 10:44:38 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:49.650 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:49.650 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:49.650 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:49.650 10:44:38 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:49.650 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:49.650 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:49.650 BYT; 00:07:49.650 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:49.650 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:49.650 BYT; 00:07:49.650 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:49.650 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:49.650 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:49.650 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:49.650 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:49.650 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:49.650 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:49.650 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:49.650 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:49.651 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:49.651 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:49.651 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:49.651 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:49.651 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:49.651 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:49.651 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:49.651 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:49.651 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:49.651 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:49.651 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:49.651 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:49.651 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:49.651 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:49.651 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:49.651 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:49.651 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:49.651 10:44:38 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:49.651 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:49.651 10:44:38 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:50.587 The operation has completed successfully. 00:07:50.587 10:44:39 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:51.524 The operation has completed successfully. 00:07:51.524 10:44:40 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:52.092 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:53.029 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.029 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.029 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.029 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:53.288 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:53.288 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.288 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.288 [] 00:07:53.288 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.288 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:53.288 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:53.288 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:53.288 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:53.288 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:53.288 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.288 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.547 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.547 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:53.547 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.547 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.547 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.547 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:53.547 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:53.547 10:44:42 
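To recap the GPT preparation traced above: blockdev.sh probes each namespace with parted -ms print and takes the first one whose disk label is unrecognised as scratch space, creates two half-disk partitions, then retags them with the SPDK partition type GUIDs grepped out of module/bdev/gpt/gpt.h, which is what lets the gpt vbdev module claim them on examine. Condensed to standalone commands (device and GUIDs exactly as in this run):

parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1  # SPDK_GPT_PART_TYPE_GUID
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1  # SPDK_GPT_PART_TYPE_GUID_OLD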
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.548 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.548 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.548 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:53.548 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.548 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.548 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.548 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:53.548 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.548 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.548 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.548 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:53.807 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:53.807 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.807 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.807 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:53.807 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.807 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:53.807 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:53.808 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d098b6fb-4835-4f33-8203-65eefbcff6d4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d098b6fb-4835-4f33-8203-65eefbcff6d4",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "d1b688ca-7fb7-4554-b31c-60fb7a811ce8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d1b688ca-7fb7-4554-b31c-60fb7a811ce8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "70fee438-e86d-40da-a5db-343988a95d95"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "70fee438-e86d-40da-a5db-343988a95d95",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "27f3305f-692a-4ea7-b1f7-e81730e01119"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "27f3305f-692a-4ea7-b1f7-e81730e01119",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "2b907801-ab75-4fa8-918f-96fb6f3a1124"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2b907801-ab75-4fa8-918f-96fb6f3a1124",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:53.808 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:53.808 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:53.808 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:53.808 10:44:42 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61748 00:07:53.808 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61748 ']' 00:07:53.808 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61748 00:07:53.808 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:53.808 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.808 10:44:42 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61748 00:07:53.808 killing process with pid 61748 00:07:53.808 10:44:43 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.808 10:44:43 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.808 10:44:43 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61748' 00:07:53.808 10:44:43 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61748 00:07:53.808 10:44:43 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61748 00:07:56.341 10:44:45 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:56.341 10:44:45 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:56.341 10:44:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:56.341 10:44:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.341 10:44:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.341 ************************************ 00:07:56.341 START TEST bdev_hello_world 00:07:56.341 ************************************ 00:07:56.341 10:44:45 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:56.341 
[2024-11-20 10:44:45.476368] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:07:56.341 [2024-11-20 10:44:45.476488] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62391 ] 00:07:56.600 [2024-11-20 10:44:45.657223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.600 [2024-11-20 10:44:45.777394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.537 [2024-11-20 10:44:46.422898] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:57.537 [2024-11-20 10:44:46.423085] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:57.537 [2024-11-20 10:44:46.423122] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:57.537 [2024-11-20 10:44:46.426078] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:57.537 [2024-11-20 10:44:46.426797] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:57.537 [2024-11-20 10:44:46.426830] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:57.537 [2024-11-20 10:44:46.427050] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:57.537 00:07:57.537 [2024-11-20 10:44:46.427072] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:58.476 00:07:58.476 real 0m2.149s 00:07:58.476 user 0m1.794s 00:07:58.476 sys 0m0.246s 00:07:58.476 10:44:47 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.476 ************************************ 00:07:58.476 END TEST bdev_hello_world 00:07:58.476 ************************************ 00:07:58.476 10:44:47 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:58.476 10:44:47 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:58.476 10:44:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:58.476 10:44:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.476 10:44:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:58.476 ************************************ 00:07:58.476 START TEST bdev_bounds 00:07:58.476 ************************************ 00:07:58.476 10:44:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:58.476 10:44:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62439 00:07:58.476 10:44:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:58.476 10:44:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:58.476 10:44:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62439' 00:07:58.476 Process bdevio pid: 62439 00:07:58.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
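bdevio here runs in wait mode: started with -w it initializes and then idles, and the suites only begin when tests.py issues the perform_tests RPC over /var/tmp/spdk.sock. The equivalent manual sequence is roughly the following (backgrounding assumed, as in the autotest wrapper):

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
# once the RPC socket is up:
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests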
00:07:58.476 10:44:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62439
00:07:58.477 10:44:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62439 ']'
00:07:58.477 10:44:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:58.477 10:44:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:58.477 10:44:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:58.477 10:44:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:58.477 10:44:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:07:58.477 [2024-11-20 10:44:47.691624] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:07:58.477 [2024-11-20 10:44:47.691973] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62439 ]
00:07:58.736 [2024-11-20 10:44:47.873411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:58.995 [2024-11-20 10:44:47.992879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:58.995 [2024-11-20 10:44:47.993010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:58.995 [2024-11-20 10:44:47.993039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:59.562 10:44:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:59.562 10:44:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:07:59.562 10:44:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:07:59.562 I/O targets:
00:07:59.562 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:07:59.562 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:07:59.562 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:07:59.562 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:59.562 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:59.562 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:59.562 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:07:59.562
00:07:59.562
00:07:59.562 CUnit - A unit testing framework for C - Version 2.1-3
00:07:59.562 http://cunit.sourceforge.net/
00:07:59.562
00:07:59.562
00:07:59.562 Suite: bdevio tests on: Nvme3n1
00:07:59.562 Test: blockdev write read block ...passed
00:07:59.562 Test: blockdev write zeroes read block ...passed
00:07:59.821 Test: blockdev write zeroes read no split ...passed
00:07:59.821 Test: blockdev write zeroes read split ...passed
00:07:59.821 Test: blockdev write zeroes read split partial ...passed
00:07:59.821 Test: blockdev reset ...[2024-11-20 10:44:48.875448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:07:59.821 [2024-11-20 10:44:48.879394] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:07:59.821 passed 00:07:59.821 Test: blockdev write read 8 blocks ...passed 00:07:59.821 Test: blockdev write read size > 128k ...passed 00:07:59.821 Test: blockdev write read invalid size ...passed 00:07:59.821 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:59.821 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:59.821 Test: blockdev write read max offset ...passed 00:07:59.821 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:59.821 Test: blockdev writev readv 8 blocks ...passed 00:07:59.821 Test: blockdev writev readv 30 x 1block ...passed 00:07:59.821 Test: blockdev writev readv block ...passed 00:07:59.821 Test: blockdev writev readv size > 128k ...passed 00:07:59.821 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:59.821 Test: blockdev comparev and writev ...[2024-11-20 10:44:48.888830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0804000 len:0x1000 00:07:59.821 passed 00:07:59.821 Test: blockdev nvme passthru rw ...passed 00:07:59.821 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:44:48.889067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:59.821 [2024-11-20 10:44:48.889863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:59.821 [2024-11-20 10:44:48.889970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:59.821 passed 00:07:59.821 Test: blockdev nvme admin passthru ...passed 00:07:59.821 Test: blockdev copy ...passed 00:07:59.821 Suite: bdevio tests on: Nvme2n3 00:07:59.821 Test: blockdev write read block ...passed 00:07:59.821 Test: blockdev write zeroes read block ...passed 00:07:59.821 Test: blockdev write zeroes read no split ...passed 00:07:59.821 Test: blockdev write zeroes read split ...passed 00:07:59.821 Test: blockdev write zeroes read split partial ...passed 00:07:59.821 Test: blockdev reset ...[2024-11-20 10:44:48.980518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:59.821 [2024-11-20 10:44:48.984990] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:59.821 passed 00:07:59.821 Test: blockdev write read 8 blocks ...passed 00:07:59.821 Test: blockdev write read size > 128k ...passed 00:07:59.821 Test: blockdev write read invalid size ...passed 00:07:59.821 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:59.821 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:59.821 Test: blockdev write read max offset ...passed 00:07:59.821 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:59.821 Test: blockdev writev readv 8 blocks ...passed 00:07:59.821 Test: blockdev writev readv 30 x 1block ...passed 00:07:59.821 Test: blockdev writev readv block ...passed 00:07:59.821 Test: blockdev writev readv size > 128k ...passed 00:07:59.821 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:59.821 Test: blockdev comparev and writev ...[2024-11-20 10:44:48.994994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0802000 len:0x1000 00:07:59.821 [2024-11-20 10:44:48.995232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:59.821 passed 00:07:59.821 Test: blockdev nvme passthru rw ...passed 00:07:59.821 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:44:48.996458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:59.821 [2024-11-20 10:44:48.996686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:59.821 passed 00:07:59.821 Test: blockdev nvme admin passthru ...passed 00:07:59.821 Test: blockdev copy ...passed 00:07:59.821 Suite: bdevio tests on: Nvme2n2 00:07:59.821 Test: blockdev write read block ...passed 00:07:59.821 Test: blockdev write zeroes read block ...passed 00:07:59.821 Test: blockdev write zeroes read no split ...passed 00:07:59.821 Test: blockdev write zeroes read split ...passed 00:08:00.080 Test: blockdev write zeroes read split partial ...passed 00:08:00.080 Test: blockdev reset ...[2024-11-20 10:44:49.088000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:00.080 passed 00:08:00.080 Test: blockdev write read 8 blocks ...[2024-11-20 10:44:49.092176] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
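One note on the COMPARE FAILURE notices recurring in each suite: the comparev and writev test deliberately drives a miscompare (every suite logs exactly this completion and still reports passed), so the notice is part of the exercised path rather than a fault. The status pair decodes as:

# (02/85) = SCT 0x2 (Media and Data Integrity Errors) / SC 0x85 (Compare Failure)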
00:08:00.080 passed 00:08:00.080 Test: blockdev write read size > 128k ...passed 00:08:00.080 Test: blockdev write read invalid size ...passed 00:08:00.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.080 Test: blockdev write read max offset ...passed 00:08:00.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.080 Test: blockdev writev readv 8 blocks ...passed 00:08:00.080 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.080 Test: blockdev writev readv block ...passed 00:08:00.080 Test: blockdev writev readv size > 128k ...passed 00:08:00.080 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.080 Test: blockdev comparev and writev ...[2024-11-20 10:44:49.101561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3638000 len:0x1000 00:08:00.080 passed 00:08:00.080 Test: blockdev nvme passthru rw ...[2024-11-20 10:44:49.101794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.080 passed 00:08:00.080 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:44:49.102639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.080 passed 00:08:00.080 Test: blockdev nvme admin passthru ...[2024-11-20 10:44:49.102856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.080 passed 00:08:00.080 Test: blockdev copy ...passed 00:08:00.080 Suite: bdevio tests on: Nvme2n1 00:08:00.080 Test: blockdev write read block ...passed 00:08:00.080 Test: blockdev write zeroes read block ...passed 00:08:00.080 Test: blockdev write zeroes read no split ...passed 00:08:00.080 Test: blockdev write zeroes read split ...passed 00:08:00.080 Test: blockdev write zeroes read split partial ...passed 00:08:00.080 Test: blockdev reset ...[2024-11-20 10:44:49.181773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:00.080 [2024-11-20 10:44:49.185734] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:08:00.080 Test: blockdev write read 8 blocks ...
00:08:00.080 passed 00:08:00.080 Test: blockdev write read size > 128k ...passed 00:08:00.080 Test: blockdev write read invalid size ...passed 00:08:00.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.080 Test: blockdev write read max offset ...passed 00:08:00.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.080 Test: blockdev writev readv 8 blocks ...passed 00:08:00.080 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.080 Test: blockdev writev readv block ...passed 00:08:00.080 Test: blockdev writev readv size > 128k ...passed 00:08:00.080 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.080 Test: blockdev comparev and writev ...[2024-11-20 10:44:49.195507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3634000 len:0x1000 00:08:00.080 [2024-11-20 10:44:49.195770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.080 passed 00:08:00.080 Test: blockdev nvme passthru rw ...passed 00:08:00.080 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:44:49.196926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:00.080 [2024-11-20 10:44:49.197124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:00.080 passed 00:08:00.080 Test: blockdev nvme admin passthru ...passed 00:08:00.080 Test: blockdev copy ...passed 00:08:00.080 Suite: bdevio tests on: Nvme1n1p2 00:08:00.080 Test: blockdev write read block ...passed 00:08:00.080 Test: blockdev write zeroes read block ...passed 00:08:00.080 Test: blockdev write zeroes read no split ...passed 00:08:00.080 Test: blockdev write zeroes read split ...passed 00:08:00.080 Test: blockdev write zeroes read split partial ...passed 00:08:00.080 Test: blockdev reset ...[2024-11-20 10:44:49.276684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:00.080 [2024-11-20 10:44:49.280533] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:08:00.080 Test: blockdev write read 8 blocks ...
00:08:00.080 passed 00:08:00.080 Test: blockdev write read size > 128k ...passed 00:08:00.080 Test: blockdev write read invalid size ...passed 00:08:00.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.080 Test: blockdev write read max offset ...passed 00:08:00.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.080 Test: blockdev writev readv 8 blocks ...passed 00:08:00.080 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.080 Test: blockdev writev readv block ...passed 00:08:00.080 Test: blockdev writev readv size > 128k ...passed 00:08:00.080 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.080 Test: blockdev comparev and writev ...[2024-11-20 10:44:49.289156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d3630000 len:0x1000 00:08:00.080 passed 00:08:00.080 Test: blockdev nvme passthru rw ...passed 00:08:00.080 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.080 Test: blockdev nvme admin passthru ...passed 00:08:00.080 Test: blockdev copy ...[2024-11-20 10:44:49.289367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.080 passed 00:08:00.080 Suite: bdevio tests on: Nvme1n1p1 00:08:00.080 Test: blockdev write read block ...passed 00:08:00.080 Test: blockdev write zeroes read block ...passed 00:08:00.080 Test: blockdev write zeroes read no split ...passed 00:08:00.080 Test: blockdev write zeroes read split ...passed 00:08:00.339 Test: blockdev write zeroes read split partial ...passed 00:08:00.339 Test: blockdev reset ...[2024-11-20 10:44:49.357257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:00.339 [2024-11-20 10:44:49.360866] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:08:00.339 Test: blockdev write read 8 blocks ...
00:08:00.339 passed 00:08:00.339 Test: blockdev write read size > 128k ...passed 00:08:00.339 Test: blockdev write read invalid size ...passed 00:08:00.339 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.339 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.339 Test: blockdev write read max offset ...passed 00:08:00.339 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.339 Test: blockdev writev readv 8 blocks ...passed 00:08:00.339 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.339 Test: blockdev writev readv block ...passed 00:08:00.339 Test: blockdev writev readv size > 128k ...passed 00:08:00.339 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.339 Test: blockdev comparev and writev ...[2024-11-20 10:44:49.369484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c120e000 len:0x1000 00:08:00.339 passed 00:08:00.339 Test: blockdev nvme passthru rw ...passed 00:08:00.339 Test: blockdev nvme passthru vendor specific ...passed 00:08:00.339 Test: blockdev nvme admin passthru ...passed 00:08:00.339 Test: blockdev copy ...[2024-11-20 10:44:49.369716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:00.339 passed 00:08:00.339 Suite: bdevio tests on: Nvme0n1 00:08:00.339 Test: blockdev write read block ...passed 00:08:00.339 Test: blockdev write zeroes read block ...passed 00:08:00.339 Test: blockdev write zeroes read no split ...passed 00:08:00.339 Test: blockdev write zeroes read split ...passed 00:08:00.339 Test: blockdev write zeroes read split partial ...passed 00:08:00.339 Test: blockdev reset ...[2024-11-20 10:44:49.455225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:00.339 passed 00:08:00.339 Test: blockdev write read 8 blocks ...[2024-11-20 10:44:49.458915] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:00.339 passed 00:08:00.339 Test: blockdev write read size > 128k ...passed 00:08:00.339 Test: blockdev write read invalid size ...passed 00:08:00.339 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:00.339 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:00.339 Test: blockdev write read max offset ...passed 00:08:00.339 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:00.339 Test: blockdev writev readv 8 blocks ...passed 00:08:00.339 Test: blockdev writev readv 30 x 1block ...passed 00:08:00.339 Test: blockdev writev readv block ...passed 00:08:00.339 Test: blockdev writev readv size > 128k ...passed 00:08:00.339 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:00.339 Test: blockdev comparev and writev ...passed 00:08:00.339 Test: blockdev nvme passthru rw ...[2024-11-20 10:44:49.466333] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:00.339 separate metadata which is not supported yet. 
00:08:00.339 passed 00:08:00.339 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:44:49.467017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:00.339 passed 00:08:00.339 Test: blockdev nvme admin passthru ...[2024-11-20 10:44:49.467209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:00.339 passed 00:08:00.339 Test: blockdev copy ...passed 00:08:00.339 00:08:00.339 Run Summary: Type Total Ran Passed Failed Inactive 00:08:00.339 suites 7 7 n/a 0 0 00:08:00.339 tests 161 161 161 0 0 00:08:00.339 asserts 1025 1025 1025 0 n/a 00:08:00.339 00:08:00.339 Elapsed time = 1.830 seconds 00:08:00.339 0 00:08:00.339 10:44:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62439 00:08:00.339 10:44:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62439 ']' 00:08:00.339 10:44:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62439 00:08:00.339 10:44:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:00.339 10:44:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.339 10:44:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62439 00:08:00.339 killing process with pid 62439 00:08:00.339 10:44:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.339 10:44:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.339 10:44:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62439' 00:08:00.339 10:44:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62439 00:08:00.339 10:44:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62439 00:08:01.717 10:44:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:01.717 00:08:01.717 real 0m2.964s 00:08:01.717 user 0m7.579s 00:08:01.717 sys 0m0.430s 00:08:01.717 10:44:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.717 ************************************ 00:08:01.717 END TEST bdev_bounds 00:08:01.717 ************************************ 00:08:01.717 10:44:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:01.717 10:44:50 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:01.717 10:44:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:01.717 10:44:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.717 10:44:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:01.717 ************************************ 00:08:01.717 START TEST bdev_nbd 00:08:01.717 ************************************ 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62505 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62505 /var/tmp/spdk-nbd.sock 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62505 ']' 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.718 10:44:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:01.718 [2024-11-20 10:44:50.740361] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
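The trace above is the standard SPDK bring-up for an nbd test: bdev_svc is launched with a private UNIX-domain RPC socket (-r /var/tmp/spdk-nbd.sock), and waitforlisten blocks until that socket answers before any nbd_start_disk call is issued. A minimal sketch of the same bring-up, using the repo paths visible in this job; the ~10-second retry budget and the use of rpc_get_methods as the readiness probe are assumptions, not a copy of autotest_common.sh:

#!/usr/bin/env bash
# Sketch: start bdev_svc on a private RPC socket and wait until it listens.
set -euo pipefail

rootdir=/home/vagrant/spdk_repo/spdk   # repo root as mounted in this VM
rpc_sock=/var/tmp/spdk-nbd.sock        # same socket the trace passes via -r

"$rootdir/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -i 0 \
    --json "$rootdir/test/bdev/bdev.json" &
nbd_pid=$!

# Poll until the app answers a trivial RPC; give up after ~10 seconds.
for ((i = 1; i <= 100; i++)); do
    if "$rootdir/scripts/rpc.py" -s "$rpc_sock" -t 1 rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done
kill -0 "$nbd_pid"   # process still alive => ready for nbd_start_disk calls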
00:08:01.718 [2024-11-20 10:44:50.740909] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.718 [2024-11-20 10:44:50.924556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.977 [2024-11-20 10:44:51.039995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.545 10:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.545 10:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:02.545 10:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:02.545 10:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.545 10:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:02.545 10:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:02.545 10:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:02.545 10:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.545 10:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:02.545 10:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:02.545 10:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:02.545 10:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:02.545 10:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:02.545 10:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:02.545 10:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:02.803 10:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:02.803 10:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:02.803 10:44:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:02.803 10:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:02.803 10:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:02.803 10:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:02.804 10:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:02.804 10:44:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:02.804 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:02.804 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:02.804 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:02.804 10:44:52 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.804 1+0 records in 00:08:02.804 1+0 records out 00:08:02.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621214 s, 6.6 MB/s 00:08:02.804 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.804 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:02.804 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.804 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.804 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:02.804 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:02.804 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:02.804 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.063 1+0 records in 00:08:03.063 1+0 records out 00:08:03.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00271071 s, 1.5 MB/s 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:03.063 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:03.322 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.323 1+0 records in 00:08:03.323 1+0 records out 00:08:03.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000699301 s, 5.9 MB/s 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:03.323 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.582 1+0 records in 00:08:03.582 1+0 records out 00:08:03.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000825606 s, 5.0 MB/s 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:03.582 10:44:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:04.150 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:04.150 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:04.150 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:04.150 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:04.150 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:04.151 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:04.151 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:04.151 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:04.151 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:04.151 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:04.151 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:04.151 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.151 1+0 records in 00:08:04.151 1+0 records out 00:08:04.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719121 s, 5.7 MB/s 00:08:04.151 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.151 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:04.151 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.151 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:04.151 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:04.151 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.151 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.151 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:08:04.409 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:04.409 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:04.409 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:04.409 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:04.409 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:04.410 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:04.410 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:04.410 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:04.410 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:04.410 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:04.410 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:04.410 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.410 1+0 records in 00:08:04.410 1+0 records out 00:08:04.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0018157 s, 2.3 MB/s 00:08:04.410 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.410 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:04.410 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.410 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:04.410 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:04.410 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.410 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.410 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.669 1+0 records in 00:08:04.669 1+0 records out 00:08:04.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000816118 s, 5.0 MB/s 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:04.669 { 00:08:04.669 "nbd_device": "/dev/nbd0", 00:08:04.669 "bdev_name": "Nvme0n1" 00:08:04.669 }, 00:08:04.669 { 00:08:04.669 "nbd_device": "/dev/nbd1", 00:08:04.669 "bdev_name": "Nvme1n1p1" 00:08:04.669 }, 00:08:04.669 { 00:08:04.669 "nbd_device": "/dev/nbd2", 00:08:04.669 "bdev_name": "Nvme1n1p2" 00:08:04.669 }, 00:08:04.669 { 00:08:04.669 "nbd_device": "/dev/nbd3", 00:08:04.669 "bdev_name": "Nvme2n1" 00:08:04.669 }, 00:08:04.669 { 00:08:04.669 "nbd_device": "/dev/nbd4", 00:08:04.669 "bdev_name": "Nvme2n2" 00:08:04.669 }, 00:08:04.669 { 00:08:04.669 "nbd_device": "/dev/nbd5", 00:08:04.669 "bdev_name": "Nvme2n3" 00:08:04.669 }, 00:08:04.669 { 00:08:04.669 "nbd_device": "/dev/nbd6", 00:08:04.669 "bdev_name": "Nvme3n1" 00:08:04.669 } 00:08:04.669 ]' 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:04.669 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:04.669 { 00:08:04.669 "nbd_device": "/dev/nbd0", 00:08:04.669 "bdev_name": "Nvme0n1" 00:08:04.669 }, 00:08:04.669 { 00:08:04.669 "nbd_device": "/dev/nbd1", 00:08:04.669 "bdev_name": "Nvme1n1p1" 00:08:04.669 }, 00:08:04.669 { 00:08:04.669 "nbd_device": "/dev/nbd2", 00:08:04.669 "bdev_name": "Nvme1n1p2" 00:08:04.669 }, 00:08:04.669 { 00:08:04.669 "nbd_device": "/dev/nbd3", 00:08:04.669 "bdev_name": "Nvme2n1" 00:08:04.669 }, 00:08:04.669 { 00:08:04.669 "nbd_device": "/dev/nbd4", 00:08:04.669 "bdev_name": "Nvme2n2" 00:08:04.669 }, 00:08:04.669 { 00:08:04.669 "nbd_device": "/dev/nbd5", 00:08:04.669 "bdev_name": "Nvme2n3" 00:08:04.669 }, 00:08:04.669 { 00:08:04.669 "nbd_device": "/dev/nbd6", 00:08:04.669 "bdev_name": "Nvme3n1" 00:08:04.669 } 00:08:04.669 ]' 00:08:04.928 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:04.928 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:04.928 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.928 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:04.928 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:04.928 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:04.928 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.928 10:44:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:05.188 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:05.188 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:05.188 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:05.188 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.188 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.188 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:05.188 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.188 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.188 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.188 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:05.447 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:05.447 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:05.447 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:05.447 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.447 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.447 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:05.447 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.447 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.447 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.447 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:05.707 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:05.707 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:05.707 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:05.707 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.707 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.707 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:05.707 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.707 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.707 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.707 10:44:54 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:05.966 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:05.966 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:05.966 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:05.966 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.966 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.966 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:05.966 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.966 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.966 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.966 10:44:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:05.966 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:05.966 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:05.966 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:05.966 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.966 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.966 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:05.966 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.966 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.966 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.966 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:06.226 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:06.226 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:06.226 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:06.226 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.226 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.226 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:06.226 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.226 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.226 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.226 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:06.485 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:06.485 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:06.485 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
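The teardown loop traced here (nbd_stop_disk followed by waitfornbd_exit) mirrors the setup path: after the RPC returns, the test keeps re-reading /proc/partitions until the kernel has actually dropped the nbdX node. A condensed sketch of that pattern with the same 20-attempt budget the trace shows; the helper name is illustrative, not the autotest_common.sh implementation:

# Sketch: detach one nbd device and wait for the kernel node to disappear.
: "${rootdir:=/home/vagrant/spdk_repo/spdk}"   # repo root, as in the trace
nbd_stop_and_wait() {
    local rpc_sock=$1 nbd_dev=$2
    local nbd_name
    nbd_name=$(basename "$nbd_dev")            # e.g. /dev/nbd6 -> nbd6

    "$rootdir/scripts/rpc.py" -s "$rpc_sock" nbd_stop_disk "$nbd_dev"

    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1                          # still registered; give the kernel time
        else
            break                              # gone from the partition table
        fi
    done
}

In this run every device is already gone on the first grep, which is why the trace jumps straight from the grep to the break.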
00:08:06.485 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.485 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.485 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:06.485 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.485 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.485 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.485 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.485 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:06.744 10:44:55 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:06.744 10:44:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:07.003 /dev/nbd0 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.003 1+0 records in 00:08:07.003 1+0 records out 00:08:07.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641657 s, 6.4 MB/s 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:07.003 10:44:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:07.571 /dev/nbd1 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:07.571 10:44:56 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.571 1+0 records in 00:08:07.571 1+0 records out 00:08:07.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704153 s, 5.8 MB/s 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:07.571 10:44:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:07.571 /dev/nbd10 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.830 1+0 records in 00:08:07.830 1+0 records out 00:08:07.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798096 s, 5.1 MB/s 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:07.830 10:44:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:07.830 /dev/nbd11 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.089 1+0 records in 00:08:08.089 1+0 records out 00:08:08.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688357 s, 6.0 MB/s 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:08.089 /dev/nbd12 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:08.089 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
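Each nbd_start_disk in this stretch is followed by the same readiness check: wait for the nbdX node to show up in /proc/partitions, then read exactly one 4 KiB block with O_DIRECT and verify the copied size, which is what produces the repeating "1+0 records in / 4096 bytes" lines. A sketch of that check; the scratch-file path and retry budget are illustrative:

# Sketch: confirm a freshly attached nbd device actually serves I/O.
waitfornbd_check() {
    local nbd_name=$1
    local scratch=/tmp/nbdtest                 # assumed scratch path

    for ((i = 1; i <= 20; i++)); do            # wait for the device node
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done

    # One direct-I/O read proves the block device answers requests...
    dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct
    # ...and the result must be one full 4 KiB block.
    [[ $(stat -c %s "$scratch") -eq 4096 ]]
    rm -f "$scratch"
}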
00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.349 1+0 records in 00:08:08.349 1+0 records out 00:08:08.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646382 s, 6.3 MB/s 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:08.349 /dev/nbd13 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:08.349 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.609 1+0 records in 00:08:08.609 1+0 records out 00:08:08.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811393 s, 5.0 MB/s 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:08.609 /dev/nbd14 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:08.609 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:08:08.868 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:08.868 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:08.868 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:08.868 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.868 1+0 records in 00:08:08.868 1+0 records out 00:08:08.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654474 s, 6.3 MB/s 00:08:08.868 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.868 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:08.868 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.868 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:08.868 10:44:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:08.868 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.868 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.868 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.868 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.868 10:44:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:09.128 { 00:08:09.128 "nbd_device": "/dev/nbd0", 00:08:09.128 "bdev_name": "Nvme0n1" 00:08:09.128 }, 00:08:09.128 { 00:08:09.128 "nbd_device": "/dev/nbd1", 00:08:09.128 "bdev_name": "Nvme1n1p1" 00:08:09.128 }, 00:08:09.128 { 00:08:09.128 "nbd_device": "/dev/nbd10", 00:08:09.128 "bdev_name": "Nvme1n1p2" 00:08:09.128 }, 00:08:09.128 { 00:08:09.128 "nbd_device": "/dev/nbd11", 00:08:09.128 "bdev_name": "Nvme2n1" 00:08:09.128 }, 00:08:09.128 { 00:08:09.128 "nbd_device": "/dev/nbd12", 00:08:09.128 "bdev_name": "Nvme2n2" 00:08:09.128 }, 00:08:09.128 { 00:08:09.128 "nbd_device": "/dev/nbd13", 00:08:09.128 "bdev_name": "Nvme2n3" 
00:08:09.128 }, 00:08:09.128 { 00:08:09.128 "nbd_device": "/dev/nbd14", 00:08:09.128 "bdev_name": "Nvme3n1" 00:08:09.128 } 00:08:09.128 ]' 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:09.128 { 00:08:09.128 "nbd_device": "/dev/nbd0", 00:08:09.128 "bdev_name": "Nvme0n1" 00:08:09.128 }, 00:08:09.128 { 00:08:09.128 "nbd_device": "/dev/nbd1", 00:08:09.128 "bdev_name": "Nvme1n1p1" 00:08:09.128 }, 00:08:09.128 { 00:08:09.128 "nbd_device": "/dev/nbd10", 00:08:09.128 "bdev_name": "Nvme1n1p2" 00:08:09.128 }, 00:08:09.128 { 00:08:09.128 "nbd_device": "/dev/nbd11", 00:08:09.128 "bdev_name": "Nvme2n1" 00:08:09.128 }, 00:08:09.128 { 00:08:09.128 "nbd_device": "/dev/nbd12", 00:08:09.128 "bdev_name": "Nvme2n2" 00:08:09.128 }, 00:08:09.128 { 00:08:09.128 "nbd_device": "/dev/nbd13", 00:08:09.128 "bdev_name": "Nvme2n3" 00:08:09.128 }, 00:08:09.128 { 00:08:09.128 "nbd_device": "/dev/nbd14", 00:08:09.128 "bdev_name": "Nvme3n1" 00:08:09.128 } 00:08:09.128 ]' 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:09.128 /dev/nbd1 00:08:09.128 /dev/nbd10 00:08:09.128 /dev/nbd11 00:08:09.128 /dev/nbd12 00:08:09.128 /dev/nbd13 00:08:09.128 /dev/nbd14' 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:09.128 /dev/nbd1 00:08:09.128 /dev/nbd10 00:08:09.128 /dev/nbd11 00:08:09.128 /dev/nbd12 00:08:09.128 /dev/nbd13 00:08:09.128 /dev/nbd14' 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:09.128 256+0 records in 00:08:09.128 256+0 records out 00:08:09.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012508 s, 83.8 MB/s 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:09.128 256+0 records in 00:08:09.128 256+0 records out 00:08:09.128 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.139435 s, 7.5 MB/s 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.128 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:09.388 256+0 records in 00:08:09.388 256+0 records out 00:08:09.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140622 s, 7.5 MB/s 00:08:09.388 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.388 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:09.388 256+0 records in 00:08:09.388 256+0 records out 00:08:09.388 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139419 s, 7.5 MB/s 00:08:09.388 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.388 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:09.647 256+0 records in 00:08:09.647 256+0 records out 00:08:09.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141436 s, 7.4 MB/s 00:08:09.647 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.647 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:09.906 256+0 records in 00:08:09.906 256+0 records out 00:08:09.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140132 s, 7.5 MB/s 00:08:09.906 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.906 10:44:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:09.906 256+0 records in 00:08:09.906 256+0 records out 00:08:09.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137149 s, 7.6 MB/s 00:08:09.906 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:09.906 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:10.166 256+0 records in 00:08:10.166 256+0 records out 00:08:10.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133106 s, 7.9 MB/s 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.166 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:10.425 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:10.425 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:10.425 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:10.425 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.425 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.425 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:10.425 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.425 10:44:59 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:10.425 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.425 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.684 10:44:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:10.942 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:10.942 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:10.942 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:10.942 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.942 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.942 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:10.942 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.942 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.942 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.942 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:11.202 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:11.202 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:11.202 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:11.202 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.202 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.202 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:11.202 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.202 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.202 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.202 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:11.462 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:11.462 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:11.462 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:11.462 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.462 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.462 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:11.462 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.462 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.462 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.462 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:11.721 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:11.721 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:11.721 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:11.721 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.721 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.721 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:11.721 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.721 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.721 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:11.721 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.721 10:45:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:11.979 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:11.979 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:11.979 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:11.979 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:11.979 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:11.979 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:11.979 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:11.979 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:11.979 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:11.979 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:11.979 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:11.979 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:11.979 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:11.979 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.979 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:11.979 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:12.238 malloc_lvol_verify 00:08:12.238 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:12.497 1b38634a-d2e4-4e4d-8b62-2a87260a923c 00:08:12.497 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:12.497 be07862a-fa65-491c-b626-9fb967f98d11 00:08:12.497 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:12.756 /dev/nbd0 00:08:12.756 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:12.756 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:12.756 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:12.756 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:12.756 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:12.756 mke2fs 1.47.0 (5-Feb-2023) 00:08:12.756 Discarding device blocks: 0/4096 done 00:08:12.756 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:12.756 00:08:12.756 Allocating group tables: 0/1 done 00:08:12.756 Writing inode tables: 0/1 done 00:08:12.756 Creating journal (1024 blocks): done 00:08:12.756 Writing superblocks and filesystem accounting information: 0/1 done 00:08:12.756 00:08:12.756 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:12.756 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.756 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:12.756 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:12.756 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:12.756 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:12.756 10:45:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62505 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62505 ']' 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62505 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62505 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.015 killing process with pid 62505 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62505' 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62505 00:08:13.015 10:45:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62505 00:08:14.459 10:45:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:14.459 00:08:14.459 real 0m12.768s 00:08:14.459 user 0m16.561s 00:08:14.459 sys 0m5.343s 00:08:14.459 10:45:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.459 10:45:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:14.459 ************************************ 00:08:14.459 END TEST bdev_nbd 00:08:14.459 ************************************ 00:08:14.459 10:45:03 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:14.459 10:45:03 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:14.459 10:45:03 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:14.459 skipping fio tests on NVMe due to multi-ns failures. 00:08:14.459 10:45:03 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:14.459 10:45:03 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:14.459 10:45:03 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:14.459 10:45:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:14.459 10:45:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.459 10:45:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:14.459 ************************************ 00:08:14.459 START TEST bdev_verify 00:08:14.459 ************************************ 00:08:14.459 10:45:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:14.459 [2024-11-20 10:45:03.581432] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:08:14.459 [2024-11-20 10:45:03.581548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62926 ] 00:08:14.719 [2024-11-20 10:45:03.768487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:14.719 [2024-11-20 10:45:03.887078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.719 [2024-11-20 10:45:03.887115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.656 Running I/O for 5 seconds... 
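For readability, the bdevperf invocation the harness just launched breaks down as follows. The annotations are a reading aid based on bdevperf's usage text, not output from this run; the -C gloss in particular is an interpretation of its multithread option:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
args=(
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdev configuration under test
  -q 128      # queue depth: 128 outstanding I/Os per job
  -o 4096     # I/O size: 4 KiB
  -w verify   # write a pattern, read it back, compare
  -t 5        # run time in seconds
  -C          # let every core in the mask submit I/O to each bdev
  -m 0x3      # core mask: cores 0 and 1, matching the two reactors above
)
"$bdevperf" "${args[@]}"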
00:08:17.531 21632.00 IOPS, 84.50 MiB/s [2024-11-20T10:45:08.160Z] 20128.00 IOPS, 78.62 MiB/s [2024-11-20T10:45:09.096Z] 20608.00 IOPS, 80.50 MiB/s [2024-11-20T10:45:10.033Z] 21008.00 IOPS, 82.06 MiB/s [2024-11-20T10:45:10.033Z] 20851.20 IOPS, 81.45 MiB/s
00:08:20.781 Latency(us)
00:08:20.781 [2024-11-20T10:45:10.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:20.781 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:20.781 Verification LBA range: start 0x0 length 0xbd0bd
00:08:20.781 Nvme0n1 : 5.08 1474.66 5.76 0.00 0.00 86345.71 13370.40 83380.74
00:08:20.781 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:20.781 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:08:20.781 Nvme0n1 : 5.04 1446.40 5.65 0.00 0.00 88153.81 19792.40 88013.01
00:08:20.781 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:20.781 Verification LBA range: start 0x0 length 0x4ff80
00:08:20.781 Nvme1n1p1 : 5.08 1473.62 5.76 0.00 0.00 86188.24 15265.41 75800.67
00:08:20.781 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:20.781 Verification LBA range: start 0x4ff80 length 0x4ff80
00:08:20.781 Nvme1n1p1 : 5.05 1445.97 5.65 0.00 0.00 87937.46 21897.97 82538.51
00:08:20.781 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:20.781 Verification LBA range: start 0x0 length 0x4ff7f
00:08:20.781 Nvme1n1p2 : 5.10 1481.45 5.79 0.00 0.00 85871.71 12686.09 74958.44
00:08:20.781 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:20.781 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:08:20.781 Nvme1n1p2 : 5.10 1456.38 5.69 0.00 0.00 87213.17 15686.53 77064.02
00:08:20.781 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:20.781 Verification LBA range: start 0x0 length 0x80000
00:08:20.781 Nvme2n1 : 5.10 1480.71 5.78 0.00 0.00 85749.09 12791.36 72431.76
00:08:20.781 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:20.781 Verification LBA range: start 0x80000 length 0x80000
00:08:20.781 Nvme2n1 : 5.10 1455.66 5.69 0.00 0.00 87040.54 17160.43 74116.22
00:08:20.781 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:20.781 Verification LBA range: start 0x0 length 0x80000
00:08:20.781 Nvme2n2 : 5.10 1480.01 5.78 0.00 0.00 85651.16 14107.35 72010.64
00:08:20.781 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:20.781 Verification LBA range: start 0x80000 length 0x80000
00:08:20.781 Nvme2n2 : 5.10 1454.98 5.68 0.00 0.00 86916.26 17897.38 74537.33
00:08:20.781 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:20.781 Verification LBA range: start 0x0 length 0x80000
00:08:20.781 Nvme2n3 : 5.11 1479.31 5.78 0.00 0.00 85541.44 14633.74 75800.67
00:08:20.781 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:20.781 Verification LBA range: start 0x80000 length 0x80000
00:08:20.781 Nvme2n3 : 5.10 1454.31 5.68 0.00 0.00 86809.39 17792.10 77064.02
00:08:20.781 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:20.781 Verification LBA range: start 0x0 length 0x20000
00:08:20.781 Nvme3n1 : 5.11 1478.64 5.78 0.00 0.00 85421.90 16002.36 77906.25
00:08:20.781 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:20.781 Verification LBA range: start 0x20000 length 0x20000
00:08:20.781 Nvme3n1 : 5.11 1453.64 5.68 0.00 0.00 86728.64 14317.91 80432.94
00:08:20.781 [2024-11-20T10:45:10.034Z] ===================================================================================================================
00:08:20.781 [2024-11-20T10:45:10.034Z] Total : 20515.74 80.14 0.00 0.00 86531.11 12686.09 88013.01
00:08:22.684
00:08:22.684 real 0m7.980s
00:08:22.684 user 0m14.703s
00:08:22.684 sys 0m0.340s
00:08:22.684 10:45:11 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:22.684 10:45:11 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:08:22.684 ************************************
00:08:22.684 END TEST bdev_verify
00:08:22.684 ************************************
00:08:22.684 10:45:11 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:22.684 10:45:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:08:22.684 10:45:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:22.684 10:45:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:22.684 ************************************
00:08:22.684 START TEST bdev_verify_big_io
00:08:22.684 ************************************
00:08:22.684 10:45:11 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:22.684 [2024-11-20 10:45:11.632646] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:08:22.684 [2024-11-20 10:45:11.632958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63035 ]
00:08:22.684 [2024-11-20 10:45:11.814116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:22.943 [2024-11-20 10:45:11.970833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:22.943 [2024-11-20 10:45:11.970866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:23.881 Running I/O for 5 seconds...
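A quick cross-check of the verify Total row above: MiB/s equals IOPS times the I/O size, so for the 4 KiB verify pass:

awk 'BEGIN { printf "%.2f MiB/s\n", 20515.74 * 4096 / 1048576 }'   # prints 80.14, matching the Total row

The big-I/O pass that has just started reuses the same workload with -o 65536, so it should report far fewer IOPS for comparable bandwidth, since each operation now carries 64 KiB.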
00:08:29.744 2459.00 IOPS, 153.69 MiB/s [2024-11-20T10:45:18.997Z] 3739.00 IOPS, 233.69 MiB/s [2024-11-20T10:45:18.998Z] 4045.33 IOPS, 252.83 MiB/s
00:08:29.745 Latency(us)
00:08:29.745 [2024-11-20T10:45:18.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:29.745 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:29.745 Verification LBA range: start 0x0 length 0xbd0b
00:08:29.745 Nvme0n1 : 5.68 108.05 6.75 0.00 0.00 1129424.31 26846.07 2102205.38
00:08:29.745 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:29.745 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:29.745 Nvme0n1 : 5.60 163.31 10.21 0.00 0.00 753548.68 27372.47 1253237.82
00:08:29.745 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:29.745 Verification LBA range: start 0x0 length 0x4ff8
00:08:29.745 Nvme1n1p1 : 5.77 119.92 7.49 0.00 0.00 1010997.89 66957.26 990462.15
00:08:29.745 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:29.745 Verification LBA range: start 0x4ff8 length 0x4ff8
00:08:29.745 Nvme1n1p1 : 5.60 163.44 10.21 0.00 0.00 734998.98 41690.37 1266713.50
00:08:29.745 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:29.745 Verification LBA range: start 0x0 length 0x4ff7
00:08:29.745 Nvme1n1p2 : 5.77 110.34 6.90 0.00 0.00 1076400.32 82538.51 1913545.92
00:08:29.745 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:29.745 Verification LBA range: start 0x4ff7 length 0x4ff7
00:08:29.745 Nvme1n1p2 : 5.68 168.02 10.50 0.00 0.00 700595.78 55166.05 1273451.33
00:08:29.745 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:29.745 Verification LBA range: start 0x0 length 0x8000
00:08:29.745 Nvme2n1 : 5.77 120.52 7.53 0.00 0.00 962335.79 82538.51 1313878.36
00:08:29.745 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:29.745 Verification LBA range: start 0x8000 length 0x8000
00:08:29.745 Nvme2n1 : 5.69 170.39 10.65 0.00 0.00 676358.27 70326.18 1286927.01
00:08:29.745 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:29.745 Verification LBA range: start 0x0 length 0x8000
00:08:29.745 Nvme2n2 : 5.80 126.91 7.93 0.00 0.00 899754.88 21476.86 943297.29
00:08:29.745 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:29.745 Verification LBA range: start 0x8000 length 0x8000
00:08:29.745 Nvme2n2 : 5.81 180.47 11.28 0.00 0.00 625056.61 49270.44 1300402.69
00:08:29.745 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:29.745 Verification LBA range: start 0x0 length 0x8000
00:08:29.745 Nvme2n3 : 5.81 132.20 8.26 0.00 0.00 846606.33 7158.95 1098267.55
00:08:29.745 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:29.745 Verification LBA range: start 0x8000 length 0x8000
00:08:29.745 Nvme2n3 : 5.82 188.84 11.80 0.00 0.00 585150.44 14212.63 1320616.20
00:08:29.745 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:29.745 Verification LBA range: start 0x0 length 0x2000
00:08:29.745 Nvme3n1 : 5.82 136.88 8.56 0.00 0.00 798288.68 3000.44 1118481.07
00:08:29.745 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:29.745 Verification LBA range: start 0x2000 length 0x2000
00:08:29.745 Nvme3n1 : 5.88 222.02 13.88 0.00 0.00 487556.10 1177.81 1084791.88
00:08:29.745 [2024-11-20T10:45:18.998Z] ===================================================================================================================
00:08:29.745 [2024-11-20T10:45:18.998Z] Total : 2111.30 131.96 0.00 0.00 767127.93 1177.81 2102205.38
00:08:31.648
00:08:31.648 real 0m9.274s
00:08:31.648 user 0m17.182s
00:08:31.648 sys 0m0.457s
00:08:31.648 ************************************
00:08:31.648 END TEST bdev_verify_big_io
00:08:31.648 ************************************
00:08:31.648 10:45:20 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:31.648 10:45:20 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:31.648 10:45:20 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:31.648 10:45:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:08:31.648 10:45:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:31.648 10:45:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:31.648 ************************************
00:08:31.648 START TEST bdev_write_zeroes
00:08:31.648 ************************************
00:08:31.648 10:45:20 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:31.907 [2024-11-20 10:45:21.000808] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:08:31.907 [2024-11-20 10:45:21.000943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63154 ]
00:08:32.166 [2024-11-20 10:45:21.190263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:32.166 [2024-11-20 10:45:21.299635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:32.732 Running I/O for 1 seconds...
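The write_zeroes workload just launched exercises the zero-fill path, which only applies to bdevs that advertise the operation. That capability is visible in bdev_get_bdevs output (the supported_io_types blocks later in this log report "write_zeroes": true); a hedged one-liner with an illustrative bdev name, assuming the target's default RPC socket:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
    | jq -r '.[0].supported_io_types.write_zeroes'   # expect: true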
00:08:34.107 68544.00 IOPS, 267.75 MiB/s
00:08:34.107
00:08:34.107 Latency(us)
00:08:34.107 [2024-11-20T10:45:23.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:34.107 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:34.107 Nvme0n1 : 1.02 9754.41 38.10 0.00 0.00 13095.89 10948.99 34531.42
00:08:34.107 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:34.107 Nvme1n1p1 : 1.02 9744.70 38.07 0.00 0.00 13090.79 10896.35 34741.98
00:08:34.107 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:34.107 Nvme1n1p2 : 1.03 9735.23 38.03 0.00 0.00 13065.76 10685.79 33268.07
00:08:34.107 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:34.107 Nvme2n1 : 1.03 9726.48 37.99 0.00 0.00 13008.49 10843.71 29688.60
00:08:34.107 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:34.107 Nvme2n2 : 1.03 9717.73 37.96 0.00 0.00 12971.13 10896.35 28846.37
00:08:34.107 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:34.107 Nvme2n3 : 1.03 9708.90 37.93 0.00 0.00 12944.64 9843.56 26846.07
00:08:34.107 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:34.107 Nvme3n1 : 1.03 9700.07 37.89 0.00 0.00 12915.70 8474.94 25161.61
00:08:34.107 [2024-11-20T10:45:23.360Z] ===================================================================================================================
00:08:34.107 [2024-11-20T10:45:23.360Z] Total : 68087.52 265.97 0.00 0.00 13013.20 8474.94 34741.98
00:08:35.046
00:08:35.046 real 0m3.262s
00:08:35.046 user 0m2.844s
00:08:35.046 sys 0m0.300s
00:08:35.046 ************************************
00:08:35.046 END TEST bdev_write_zeroes
00:08:35.046 ************************************
00:08:35.046 10:45:24 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:35.046 10:45:24 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:08:35.046 10:45:24 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:35.046 10:45:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:08:35.046 10:45:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:35.046 10:45:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:35.046 ************************************
00:08:35.046 START TEST bdev_json_nonenclosed
00:08:35.046 ************************************
00:08:35.046 10:45:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:35.305 [2024-11-20 10:45:24.319038] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:08:35.305 [2024-11-20 10:45:24.319659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63208 ] 00:08:35.305 [2024-11-20 10:45:24.491673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.564 [2024-11-20 10:45:24.605490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.564 [2024-11-20 10:45:24.605816] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:35.564 [2024-11-20 10:45:24.605848] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:35.564 [2024-11-20 10:45:24.605862] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.823 00:08:35.823 real 0m0.654s 00:08:35.823 user 0m0.394s 00:08:35.823 sys 0m0.155s 00:08:35.823 10:45:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.823 ************************************ 00:08:35.823 END TEST bdev_json_nonenclosed 00:08:35.823 ************************************ 00:08:35.823 10:45:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:35.823 10:45:24 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:35.823 10:45:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:35.823 10:45:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.823 10:45:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:35.823 ************************************ 00:08:35.823 START TEST bdev_json_nonarray 00:08:35.823 ************************************ 00:08:35.823 10:45:24 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:35.823 [2024-11-20 10:45:25.000953] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:08:35.823 [2024-11-20 10:45:25.001068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63234 ] 00:08:36.082 [2024-11-20 10:45:25.163035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.082 [2024-11-20 10:45:25.278277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.082 [2024-11-20 10:45:25.278372] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
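Both failures above come from json_config's validation: the configuration file must be a single JSON object, and its "subsystems" key must be an array. The nonenclosed.json and nonarray.json fixtures violate exactly these two rules. For contrast, a minimal well-formed skeleton (file path illustrative):

cat > /tmp/valid-config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": []
    }
  ]
}
EOF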
00:08:36.082 [2024-11-20 10:45:25.278394] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:36.082 [2024-11-20 10:45:25.278406] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.341 00:08:36.341 real 0m0.611s 00:08:36.341 user 0m0.390s 00:08:36.341 sys 0m0.118s 00:08:36.341 ************************************ 00:08:36.341 END TEST bdev_json_nonarray 00:08:36.341 ************************************ 00:08:36.341 10:45:25 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.341 10:45:25 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:36.341 10:45:25 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:36.341 10:45:25 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:36.341 10:45:25 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:36.341 10:45:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.341 10:45:25 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.341 10:45:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.600 ************************************ 00:08:36.600 START TEST bdev_gpt_uuid 00:08:36.600 ************************************ 00:08:36.600 10:45:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:36.600 10:45:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:36.600 10:45:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:36.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.600 10:45:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63259 00:08:36.600 10:45:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:36.600 10:45:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:36.600 10:45:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63259 00:08:36.600 10:45:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63259 ']' 00:08:36.600 10:45:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.600 10:45:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.600 10:45:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.600 10:45:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.600 10:45:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:36.600 [2024-11-20 10:45:25.708359] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
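The bdev_gpt_uuid test starting here checks, as the trace below shows, that each GPT partition bdev exposes its unique partition GUID both as an alias and under driver_specific.gpt. Condensed to its core, with the GUID taken from this run's own log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
guid=$("$rpc" bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
    | jq -r '.[0].driver_specific.gpt.unique_partition_guid')
[[ "$guid" == 6f89f330-603b-4116-ac73-2ca8eae53030 ]] && echo 'GPT UUID matches'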
00:08:36.600 [2024-11-20 10:45:25.708481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63259 ] 00:08:36.859 [2024-11-20 10:45:25.869733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.859 [2024-11-20 10:45:25.986528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.796 10:45:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:37.796 10:45:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:37.796 10:45:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:37.796 10:45:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.796 10:45:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:38.054 Some configs were skipped because the RPC state that can call them passed over. 00:08:38.054 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.054 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:38.054 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.054 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:38.054 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.054 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:38.054 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.054 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:38.054 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.054 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:38.054 { 00:08:38.054 "name": "Nvme1n1p1", 00:08:38.054 "aliases": [ 00:08:38.054 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:38.054 ], 00:08:38.054 "product_name": "GPT Disk", 00:08:38.054 "block_size": 4096, 00:08:38.054 "num_blocks": 655104, 00:08:38.054 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:38.054 "assigned_rate_limits": { 00:08:38.054 "rw_ios_per_sec": 0, 00:08:38.054 "rw_mbytes_per_sec": 0, 00:08:38.054 "r_mbytes_per_sec": 0, 00:08:38.054 "w_mbytes_per_sec": 0 00:08:38.054 }, 00:08:38.054 "claimed": false, 00:08:38.054 "zoned": false, 00:08:38.054 "supported_io_types": { 00:08:38.054 "read": true, 00:08:38.054 "write": true, 00:08:38.054 "unmap": true, 00:08:38.054 "flush": true, 00:08:38.054 "reset": true, 00:08:38.054 "nvme_admin": false, 00:08:38.054 "nvme_io": false, 00:08:38.054 "nvme_io_md": false, 00:08:38.054 "write_zeroes": true, 00:08:38.054 "zcopy": false, 00:08:38.055 "get_zone_info": false, 00:08:38.055 "zone_management": false, 00:08:38.055 "zone_append": false, 00:08:38.055 "compare": true, 00:08:38.055 "compare_and_write": false, 00:08:38.055 "abort": true, 00:08:38.055 "seek_hole": false, 00:08:38.055 "seek_data": false, 00:08:38.055 "copy": true, 00:08:38.055 "nvme_iov_md": false 00:08:38.055 }, 00:08:38.055 "driver_specific": { 
00:08:38.055 "gpt": { 00:08:38.055 "base_bdev": "Nvme1n1", 00:08:38.055 "offset_blocks": 256, 00:08:38.055 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:38.055 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:38.055 "partition_name": "SPDK_TEST_first" 00:08:38.055 } 00:08:38.055 } 00:08:38.055 } 00:08:38.055 ]' 00:08:38.055 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:38.055 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:38.055 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:38.312 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:38.312 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:38.312 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:38.312 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:38.312 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.312 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:38.312 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.312 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:38.312 { 00:08:38.312 "name": "Nvme1n1p2", 00:08:38.312 "aliases": [ 00:08:38.312 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:38.312 ], 00:08:38.312 "product_name": "GPT Disk", 00:08:38.312 "block_size": 4096, 00:08:38.312 "num_blocks": 655103, 00:08:38.312 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:38.313 "assigned_rate_limits": { 00:08:38.313 "rw_ios_per_sec": 0, 00:08:38.313 "rw_mbytes_per_sec": 0, 00:08:38.313 "r_mbytes_per_sec": 0, 00:08:38.313 "w_mbytes_per_sec": 0 00:08:38.313 }, 00:08:38.313 "claimed": false, 00:08:38.313 "zoned": false, 00:08:38.313 "supported_io_types": { 00:08:38.313 "read": true, 00:08:38.313 "write": true, 00:08:38.313 "unmap": true, 00:08:38.313 "flush": true, 00:08:38.313 "reset": true, 00:08:38.313 "nvme_admin": false, 00:08:38.313 "nvme_io": false, 00:08:38.313 "nvme_io_md": false, 00:08:38.313 "write_zeroes": true, 00:08:38.313 "zcopy": false, 00:08:38.313 "get_zone_info": false, 00:08:38.313 "zone_management": false, 00:08:38.313 "zone_append": false, 00:08:38.313 "compare": true, 00:08:38.313 "compare_and_write": false, 00:08:38.313 "abort": true, 00:08:38.313 "seek_hole": false, 00:08:38.313 "seek_data": false, 00:08:38.313 "copy": true, 00:08:38.313 "nvme_iov_md": false 00:08:38.313 }, 00:08:38.313 "driver_specific": { 00:08:38.313 "gpt": { 00:08:38.313 "base_bdev": "Nvme1n1", 00:08:38.313 "offset_blocks": 655360, 00:08:38.313 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:38.313 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:38.313 "partition_name": "SPDK_TEST_second" 00:08:38.313 } 00:08:38.313 } 00:08:38.313 } 00:08:38.313 ]' 00:08:38.313 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:38.313 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:38.313 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:38.313 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:38.313 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:38.313 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:38.313 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63259 00:08:38.313 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63259 ']' 00:08:38.313 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63259 00:08:38.313 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:38.313 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.313 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63259 00:08:38.571 killing process with pid 63259 00:08:38.571 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:38.571 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:38.571 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63259' 00:08:38.571 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63259 00:08:38.571 10:45:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63259 00:08:41.106 ************************************ 00:08:41.106 END TEST bdev_gpt_uuid 00:08:41.106 ************************************ 00:08:41.106 00:08:41.106 real 0m4.323s 00:08:41.106 user 0m4.492s 00:08:41.106 sys 0m0.533s 00:08:41.106 10:45:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.106 10:45:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:41.106 10:45:29 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:41.106 10:45:29 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:41.106 10:45:29 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:41.106 10:45:29 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:41.106 10:45:29 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:41.106 10:45:29 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:41.106 10:45:29 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:41.106 10:45:29 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:41.106 10:45:29 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:41.365 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:41.624 Waiting for block devices as requested 00:08:41.624 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:41.883 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:41.883 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:41.883 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:47.225 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:47.225 10:45:36 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:47.225 10:45:36 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:47.225 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:47.225 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:47.225 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:47.225 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:47.225 10:45:36 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:47.225 00:08:47.225 real 1m5.664s 00:08:47.225 user 1m21.937s 00:08:47.225 sys 0m12.170s 00:08:47.225 10:45:36 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.225 ************************************ 00:08:47.225 10:45:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:47.225 END TEST blockdev_nvme_gpt 00:08:47.225 ************************************ 00:08:47.484 10:45:36 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:47.484 10:45:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:47.484 10:45:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.484 10:45:36 -- common/autotest_common.sh@10 -- # set +x 00:08:47.484 ************************************ 00:08:47.484 START TEST nvme 00:08:47.484 ************************************ 00:08:47.484 10:45:36 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:47.484 * Looking for test storage... 00:08:47.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:47.484 10:45:36 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:47.484 10:45:36 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:47.484 10:45:36 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:47.484 10:45:36 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:47.484 10:45:36 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.484 10:45:36 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.484 10:45:36 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.484 10:45:36 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.484 10:45:36 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.484 10:45:36 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.484 10:45:36 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.484 10:45:36 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.484 10:45:36 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.484 10:45:36 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.484 10:45:36 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.484 10:45:36 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:47.484 10:45:36 nvme -- scripts/common.sh@345 -- # : 1 00:08:47.743 10:45:36 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.743 10:45:36 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.743 10:45:36 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:47.744 10:45:36 nvme -- scripts/common.sh@353 -- # local d=1 00:08:47.744 10:45:36 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.744 10:45:36 nvme -- scripts/common.sh@355 -- # echo 1 00:08:47.744 10:45:36 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.744 10:45:36 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:47.744 10:45:36 nvme -- scripts/common.sh@353 -- # local d=2 00:08:47.744 10:45:36 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.744 10:45:36 nvme -- scripts/common.sh@355 -- # echo 2 00:08:47.744 10:45:36 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.744 10:45:36 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.744 10:45:36 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.744 10:45:36 nvme -- scripts/common.sh@368 -- # return 0 00:08:47.744 10:45:36 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.744 10:45:36 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:47.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.744 --rc genhtml_branch_coverage=1 00:08:47.744 --rc genhtml_function_coverage=1 00:08:47.744 --rc genhtml_legend=1 00:08:47.744 --rc geninfo_all_blocks=1 00:08:47.744 --rc geninfo_unexecuted_blocks=1 00:08:47.744 00:08:47.744 ' 00:08:47.744 10:45:36 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:47.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.744 --rc genhtml_branch_coverage=1 00:08:47.744 --rc genhtml_function_coverage=1 00:08:47.744 --rc genhtml_legend=1 00:08:47.744 --rc geninfo_all_blocks=1 00:08:47.744 --rc geninfo_unexecuted_blocks=1 00:08:47.744 00:08:47.744 ' 00:08:47.744 10:45:36 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:47.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.744 --rc genhtml_branch_coverage=1 00:08:47.744 --rc genhtml_function_coverage=1 00:08:47.744 --rc genhtml_legend=1 00:08:47.744 --rc geninfo_all_blocks=1 00:08:47.744 --rc geninfo_unexecuted_blocks=1 00:08:47.744 00:08:47.744 ' 00:08:47.744 10:45:36 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:47.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.744 --rc genhtml_branch_coverage=1 00:08:47.744 --rc genhtml_function_coverage=1 00:08:47.744 --rc genhtml_legend=1 00:08:47.744 --rc geninfo_all_blocks=1 00:08:47.744 --rc geninfo_unexecuted_blocks=1 00:08:47.744 00:08:47.744 ' 00:08:47.744 10:45:36 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:48.313 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:49.250 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.250 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.250 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.250 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:49.250 10:45:38 nvme -- nvme/nvme.sh@79 -- # uname 00:08:49.250 10:45:38 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:49.250 10:45:38 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:49.250 10:45:38 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:49.250 10:45:38 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:49.250 10:45:38 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:49.250 10:45:38 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:49.250 Waiting for stub to ready for secondary processes... 00:08:49.250 10:45:38 nvme -- common/autotest_common.sh@1075 -- # stubpid=63929 00:08:49.250 10:45:38 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:49.250 10:45:38 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:49.250 10:45:38 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:49.250 10:45:38 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/63929 ]] 00:08:49.250 10:45:38 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:49.250 [2024-11-20 10:45:38.429771] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:08:49.250 [2024-11-20 10:45:38.429893] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:50.185 10:45:39 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:50.185 10:45:39 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/63929 ]] 00:08:50.185 10:45:39 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:50.445 [2024-11-20 10:45:39.449315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:50.445 [2024-11-20 10:45:39.557924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.445 [2024-11-20 10:45:39.558060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.445 [2024-11-20 10:45:39.558093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.445 [2024-11-20 10:45:39.575462] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:50.445 [2024-11-20 10:45:39.575494] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:50.445 [2024-11-20 10:45:39.592989] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:50.445 [2024-11-20 10:45:39.593131] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:50.445 [2024-11-20 10:45:39.596051] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:50.445 [2024-11-20 10:45:39.596252] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:50.445 [2024-11-20 10:45:39.596327] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:50.445 [2024-11-20 10:45:39.599164] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:50.445 [2024-11-20 10:45:39.599416] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:50.445 [2024-11-20 10:45:39.599495] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:50.445 [2024-11-20 10:45:39.602424] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:50.445 [2024-11-20 10:45:39.602874] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:50.445 [2024-11-20 10:45:39.602947] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:50.445 [2024-11-20 10:45:39.602997] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:50.445 [2024-11-20 10:45:39.603043] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:51.382 done. 00:08:51.382 10:45:40 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:51.382 10:45:40 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:51.382 10:45:40 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:51.382 10:45:40 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:51.382 10:45:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.382 10:45:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.382 ************************************ 00:08:51.382 START TEST nvme_reset 00:08:51.382 ************************************ 00:08:51.382 10:45:40 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:51.642 Initializing NVMe Controllers 00:08:51.642 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:51.642 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:51.642 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:51.642 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:51.642 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:51.642 00:08:51.642 real 0m0.300s 00:08:51.642 user 0m0.099s 00:08:51.642 sys 0m0.155s 00:08:51.642 10:45:40 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.642 ************************************ 00:08:51.642 END TEST nvme_reset 00:08:51.642 ************************************ 00:08:51.642 10:45:40 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:51.642 10:45:40 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:51.642 10:45:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.642 10:45:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.642 10:45:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.642 ************************************ 00:08:51.642 START TEST nvme_identify 00:08:51.642 ************************************ 00:08:51.642 10:45:40 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:51.642 10:45:40 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:51.642 10:45:40 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:51.642 10:45:40 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:51.642 10:45:40 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:51.642 10:45:40 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:51.642 10:45:40 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:51.642 10:45:40 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:51.642 10:45:40 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:51.642 10:45:40 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:51.642 10:45:40 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:51.642 10:45:40 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:51.642 10:45:40 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:52.163 [2024-11-20 10:45:41.152514] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 63963 terminated unexpected 00:08:52.163 ===================================================== 00:08:52.163 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:52.163 ===================================================== 00:08:52.163 Controller Capabilities/Features 00:08:52.163 ================================ 00:08:52.163 Vendor ID: 1b36 00:08:52.163 Subsystem Vendor ID: 1af4 00:08:52.163 Serial Number: 12340 00:08:52.163 Model Number: QEMU NVMe Ctrl 00:08:52.163 Firmware Version: 8.0.0 00:08:52.163 Recommended Arb Burst: 6 00:08:52.163 IEEE OUI Identifier: 00 54 52 00:08:52.163 Multi-path I/O 00:08:52.163 May have multiple subsystem ports: No 00:08:52.163 May have multiple controllers: No 00:08:52.163 Associated with SR-IOV VF: No 00:08:52.163 Max Data Transfer Size: 524288 00:08:52.163 Max Number of Namespaces: 256 00:08:52.163 Max Number of I/O Queues: 64 00:08:52.163 NVMe Specification Version (VS): 1.4 00:08:52.163 NVMe Specification Version (Identify): 1.4 00:08:52.163 Maximum Queue Entries: 2048 00:08:52.163 Contiguous Queues Required: Yes 00:08:52.163 Arbitration Mechanisms Supported 00:08:52.163 Weighted Round Robin: Not Supported 00:08:52.163 Vendor Specific: Not Supported 00:08:52.163 Reset Timeout: 7500 ms 00:08:52.163 Doorbell Stride: 4 bytes 00:08:52.163 NVM Subsystem Reset: Not Supported 00:08:52.163 Command Sets Supported 00:08:52.163 NVM Command Set: Supported 00:08:52.163 Boot Partition: Not Supported 00:08:52.163 Memory Page Size Minimum: 4096 bytes 00:08:52.163 Memory Page Size Maximum: 65536 bytes 00:08:52.163 Persistent Memory Region: Not Supported 00:08:52.163 Optional Asynchronous Events Supported 00:08:52.163 Namespace Attribute Notices: Supported 00:08:52.163 Firmware Activation Notices: Not Supported 00:08:52.163 ANA Change Notices: Not Supported 00:08:52.163 PLE Aggregate Log Change Notices: Not Supported 00:08:52.163 LBA Status Info Alert Notices: Not Supported 00:08:52.163 EGE Aggregate Log Change Notices: Not Supported 00:08:52.163 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.163 Zone Descriptor Change Notices: Not Supported 00:08:52.163 Discovery Log Change Notices: Not Supported 00:08:52.163 Controller Attributes 00:08:52.163 128-bit Host Identifier: Not Supported 00:08:52.163 Non-Operational Permissive Mode: Not Supported 00:08:52.163 NVM Sets: Not Supported 00:08:52.163 Read Recovery Levels: Not Supported 00:08:52.163 Endurance Groups: Not Supported 00:08:52.163 Predictable Latency Mode: Not Supported 00:08:52.163 Traffic Based Keep ALive: Not Supported 00:08:52.163 Namespace Granularity: Not Supported 00:08:52.163 SQ Associations: Not Supported 00:08:52.163 UUID List: Not Supported 00:08:52.163 Multi-Domain Subsystem: Not Supported 00:08:52.163 Fixed Capacity Management: Not Supported 00:08:52.163 Variable Capacity Management: Not Supported 00:08:52.163 Delete Endurance Group: Not Supported 00:08:52.163 Delete NVM Set: Not Supported 00:08:52.163 Extended LBA Formats Supported: Supported 00:08:52.163 Flexible Data Placement Supported: Not Supported 00:08:52.163 00:08:52.163 Controller Memory Buffer Support 00:08:52.163 ================================ 00:08:52.163 Supported: No 00:08:52.163 00:08:52.163 Persistent
Memory Region Support 00:08:52.163 ================================ 00:08:52.163 Supported: No 00:08:52.163 00:08:52.163 Admin Command Set Attributes 00:08:52.163 ============================ 00:08:52.163 Security Send/Receive: Not Supported 00:08:52.163 Format NVM: Supported 00:08:52.163 Firmware Activate/Download: Not Supported 00:08:52.163 Namespace Management: Supported 00:08:52.163 Device Self-Test: Not Supported 00:08:52.163 Directives: Supported 00:08:52.163 NVMe-MI: Not Supported 00:08:52.163 Virtualization Management: Not Supported 00:08:52.163 Doorbell Buffer Config: Supported 00:08:52.163 Get LBA Status Capability: Not Supported 00:08:52.163 Command & Feature Lockdown Capability: Not Supported 00:08:52.163 Abort Command Limit: 4 00:08:52.163 Async Event Request Limit: 4 00:08:52.163 Number of Firmware Slots: N/A 00:08:52.163 Firmware Slot 1 Read-Only: N/A 00:08:52.163 Firmware Activation Without Reset: N/A 00:08:52.163 Multiple Update Detection Support: N/A 00:08:52.163 Firmware Update Granularity: No Information Provided 00:08:52.163 Per-Namespace SMART Log: Yes 00:08:52.163 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.163 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:52.163 Command Effects Log Page: Supported 00:08:52.163 Get Log Page Extended Data: Supported 00:08:52.163 Telemetry Log Pages: Not Supported 00:08:52.163 Persistent Event Log Pages: Not Supported 00:08:52.163 Supported Log Pages Log Page: May Support 00:08:52.163 Commands Supported & Effects Log Page: Not Supported 00:08:52.163 Feature Identifiers & Effects Log Page:May Support 00:08:52.163 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.163 Data Area 4 for Telemetry Log: Not Supported 00:08:52.163 Error Log Page Entries Supported: 1 00:08:52.163 Keep Alive: Not Supported 00:08:52.163 00:08:52.163 NVM Command Set Attributes 00:08:52.163 ========================== 00:08:52.163 Submission Queue Entry Size 00:08:52.163 Max: 64 00:08:52.163 Min: 64 00:08:52.163 Completion Queue Entry Size 00:08:52.163 Max: 16 00:08:52.163 Min: 16 00:08:52.163 Number of Namespaces: 256 00:08:52.163 Compare Command: Supported 00:08:52.163 Write Uncorrectable Command: Not Supported 00:08:52.163 Dataset Management Command: Supported 00:08:52.163 Write Zeroes Command: Supported 00:08:52.163 Set Features Save Field: Supported 00:08:52.163 Reservations: Not Supported 00:08:52.163 Timestamp: Supported 00:08:52.163 Copy: Supported 00:08:52.163 Volatile Write Cache: Present 00:08:52.163 Atomic Write Unit (Normal): 1 00:08:52.163 Atomic Write Unit (PFail): 1 00:08:52.163 Atomic Compare & Write Unit: 1 00:08:52.163 Fused Compare & Write: Not Supported 00:08:52.163 Scatter-Gather List 00:08:52.163 SGL Command Set: Supported 00:08:52.163 SGL Keyed: Not Supported 00:08:52.163 SGL Bit Bucket Descriptor: Not Supported 00:08:52.163 SGL Metadata Pointer: Not Supported 00:08:52.163 Oversized SGL: Not Supported 00:08:52.163 SGL Metadata Address: Not Supported 00:08:52.163 SGL Offset: Not Supported 00:08:52.163 Transport SGL Data Block: Not Supported 00:08:52.163 Replay Protected Memory Block: Not Supported 00:08:52.163 00:08:52.163 Firmware Slot Information 00:08:52.163 ========================= 00:08:52.163 Active slot: 1 00:08:52.163 Slot 1 Firmware Revision: 1.0 00:08:52.163 00:08:52.163 00:08:52.163 Commands Supported and Effects 00:08:52.163 ============================== 00:08:52.163 Admin Commands 00:08:52.163 -------------- 00:08:52.163 Delete I/O Submission Queue (00h): Supported 00:08:52.163 Create I/O Submission 
Queue (01h): Supported 00:08:52.163 Get Log Page (02h): Supported 00:08:52.163 Delete I/O Completion Queue (04h): Supported 00:08:52.163 Create I/O Completion Queue (05h): Supported 00:08:52.163 Identify (06h): Supported 00:08:52.163 Abort (08h): Supported 00:08:52.163 Set Features (09h): Supported 00:08:52.163 Get Features (0Ah): Supported 00:08:52.163 Asynchronous Event Request (0Ch): Supported 00:08:52.163 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.163 Directive Send (19h): Supported 00:08:52.163 Directive Receive (1Ah): Supported 00:08:52.163 Virtualization Management (1Ch): Supported 00:08:52.163 Doorbell Buffer Config (7Ch): Supported 00:08:52.163 Format NVM (80h): Supported LBA-Change 00:08:52.163 I/O Commands 00:08:52.163 ------------ 00:08:52.163 Flush (00h): Supported LBA-Change 00:08:52.163 Write (01h): Supported LBA-Change 00:08:52.163 Read (02h): Supported 00:08:52.163 Compare (05h): Supported 00:08:52.163 Write Zeroes (08h): Supported LBA-Change 00:08:52.163 Dataset Management (09h): Supported LBA-Change 00:08:52.163 Unknown (0Ch): Supported 00:08:52.163 Unknown (12h): Supported 00:08:52.163 Copy (19h): Supported LBA-Change 00:08:52.163 Unknown (1Dh): Supported LBA-Change 00:08:52.163 00:08:52.163 Error Log 00:08:52.163 ========= 00:08:52.163 00:08:52.163 Arbitration 00:08:52.163 =========== 00:08:52.163 Arbitration Burst: no limit 00:08:52.163 00:08:52.163 Power Management 00:08:52.163 ================ 00:08:52.163 Number of Power States: 1 00:08:52.163 Current Power State: Power State #0 00:08:52.163 Power State #0: 00:08:52.163 Max Power: 25.00 W 00:08:52.163 Non-Operational State: Operational 00:08:52.163 Entry Latency: 16 microseconds 00:08:52.164 Exit Latency: 4 microseconds 00:08:52.164 Relative Read Throughput: 0 00:08:52.164 Relative Read Latency: 0 00:08:52.164 Relative Write Throughput: 0 00:08:52.164 Relative Write Latency: 0 00:08:52.164 [2024-11-20 10:45:41.153610] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 63963 terminated unexpected 00:08:52.164 Idle Power: Not Reported 00:08:52.164 Active Power: Not Reported 00:08:52.164 Non-Operational Permissive Mode: Not Supported 00:08:52.164 00:08:52.164 Health Information 00:08:52.164 ================== 00:08:52.164 Critical Warnings: 00:08:52.164 Available Spare Space: OK 00:08:52.164 Temperature: OK 00:08:52.164 Device Reliability: OK 00:08:52.164 Read Only: No 00:08:52.164 Volatile Memory Backup: OK 00:08:52.164 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.164 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.164 Available Spare: 0% 00:08:52.164 Available Spare Threshold: 0% 00:08:52.164 Life Percentage Used: 0% 00:08:52.164 Data Units Read: 767 00:08:52.164 Data Units Written: 695 00:08:52.164 Host Read Commands: 37869 00:08:52.164 Host Write Commands: 37655 00:08:52.164 Controller Busy Time: 0 minutes 00:08:52.164 Power Cycles: 0 00:08:52.164 Power On Hours: 0 hours 00:08:52.164 Unsafe Shutdowns: 0 00:08:52.164 Unrecoverable Media Errors: 0 00:08:52.164 Lifetime Error Log Entries: 0 00:08:52.164 Warning Temperature Time: 0 minutes 00:08:52.164 Critical Temperature Time: 0 minutes 00:08:52.164 00:08:52.164 Number of Queues 00:08:52.164 ================ 00:08:52.164 Number of I/O Submission Queues: 64 00:08:52.164 Number of I/O Completion Queues: 64 00:08:52.164 00:08:52.164 ZNS Specific Controller Data 00:08:52.164 ============================ 00:08:52.164 Zone Append Size Limit: 0 00:08:52.164 00:08:52.164
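
Note: the Health Information block above reports raw NVMe units: "Data Units" counters and integer-Kelvin temperatures. A minimal bash sketch for converting the 12340 controller's values into friendlier units (the numbers are copied from the output above; the 512,000-byte size of one data unit, i.e. 1000 units of 512 bytes, is the NVMe-spec definition and is an assumption here, not something this log states):

  data_units_read=767       # "Data Units Read" for the 12340 controller above
  data_units_written=695    # "Data Units Written"
  temp_k=323                # "Current Temperature" in Kelvin
  echo "bytes read   : $(( data_units_read * 512000 ))"     # 392704000, roughly 392.7 MB
  echo "bytes written: $(( data_units_written * 512000 ))"  # 355840000, roughly 355.8 MB
  echo "temperature  : $(( temp_k - 273 )) C"               # 50 C, matching the "(50 Celsius)" above
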
00:08:52.164 Active Namespaces 00:08:52.164 ================= 00:08:52.164 Namespace ID:1 00:08:52.164 Error Recovery Timeout: Unlimited 00:08:52.164 Command Set Identifier: NVM (00h) 00:08:52.164 Deallocate: Supported 00:08:52.164 Deallocated/Unwritten Error: Supported 00:08:52.164 Deallocated Read Value: All 0x00 00:08:52.164 Deallocate in Write Zeroes: Not Supported 00:08:52.164 Deallocated Guard Field: 0xFFFF 00:08:52.164 Flush: Supported 00:08:52.164 Reservation: Not Supported 00:08:52.164 Metadata Transferred as: Separate Metadata Buffer 00:08:52.164 Namespace Sharing Capabilities: Private 00:08:52.164 Size (in LBAs): 1548666 (5GiB) 00:08:52.164 Capacity (in LBAs): 1548666 (5GiB) 00:08:52.164 Utilization (in LBAs): 1548666 (5GiB) 00:08:52.164 Thin Provisioning: Not Supported 00:08:52.164 Per-NS Atomic Units: No 00:08:52.164 Maximum Single Source Range Length: 128 00:08:52.164 Maximum Copy Length: 128 00:08:52.164 Maximum Source Range Count: 128 00:08:52.164 NGUID/EUI64 Never Reused: No 00:08:52.164 Namespace Write Protected: No 00:08:52.164 Number of LBA Formats: 8 00:08:52.164 Current LBA Format: LBA Format #07 00:08:52.164 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.164 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.164 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.164 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.164 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.164 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.164 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.164 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.164 00:08:52.164 NVM Specific Namespace Data 00:08:52.164 =========================== 00:08:52.164 Logical Block Storage Tag Mask: 0 00:08:52.164 Protection Information Capabilities: 00:08:52.164 16b Guard Protection Information Storage Tag Support: No 00:08:52.164 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.164 Storage Tag Check Read Support: No 00:08:52.164 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.164 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.164 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.164 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.164 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.164 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.164 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.164 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.164 ===================================================== 00:08:52.164 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:52.164 ===================================================== 00:08:52.164 Controller Capabilities/Features 00:08:52.164 ================================ 00:08:52.164 Vendor ID: 1b36 00:08:52.164 Subsystem Vendor ID: 1af4 00:08:52.164 Serial Number: 12341 00:08:52.164 Model Number: QEMU NVMe Ctrl 00:08:52.164 Firmware Version: 8.0.0 00:08:52.164 Recommended Arb Burst: 6 00:08:52.164 IEEE OUI Identifier: 00 54 52 00:08:52.164 Multi-path I/O 00:08:52.164 May have multiple subsystem ports: No 00:08:52.164 May have multiple controllers: No 00:08:52.164 
Associated with SR-IOV VF: No 00:08:52.164 Max Data Transfer Size: 524288 00:08:52.164 Max Number of Namespaces: 256 00:08:52.164 Max Number of I/O Queues: 64 00:08:52.164 NVMe Specification Version (VS): 1.4 00:08:52.164 NVMe Specification Version (Identify): 1.4 00:08:52.164 Maximum Queue Entries: 2048 00:08:52.164 Contiguous Queues Required: Yes 00:08:52.164 Arbitration Mechanisms Supported 00:08:52.164 Weighted Round Robin: Not Supported 00:08:52.164 Vendor Specific: Not Supported 00:08:52.164 Reset Timeout: 7500 ms 00:08:52.164 Doorbell Stride: 4 bytes 00:08:52.164 NVM Subsystem Reset: Not Supported 00:08:52.164 Command Sets Supported 00:08:52.164 NVM Command Set: Supported 00:08:52.164 Boot Partition: Not Supported 00:08:52.164 Memory Page Size Minimum: 4096 bytes 00:08:52.164 Memory Page Size Maximum: 65536 bytes 00:08:52.164 Persistent Memory Region: Not Supported 00:08:52.164 Optional Asynchronous Events Supported 00:08:52.164 Namespace Attribute Notices: Supported 00:08:52.164 Firmware Activation Notices: Not Supported 00:08:52.164 ANA Change Notices: Not Supported 00:08:52.164 PLE Aggregate Log Change Notices: Not Supported 00:08:52.164 LBA Status Info Alert Notices: Not Supported 00:08:52.164 EGE Aggregate Log Change Notices: Not Supported 00:08:52.164 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.164 Zone Descriptor Change Notices: Not Supported 00:08:52.164 Discovery Log Change Notices: Not Supported 00:08:52.164 Controller Attributes 00:08:52.164 128-bit Host Identifier: Not Supported 00:08:52.164 Non-Operational Permissive Mode: Not Supported 00:08:52.164 NVM Sets: Not Supported 00:08:52.164 Read Recovery Levels: Not Supported 00:08:52.164 Endurance Groups: Not Supported 00:08:52.164 Predictable Latency Mode: Not Supported 00:08:52.164 Traffic Based Keep ALive: Not Supported 00:08:52.164 Namespace Granularity: Not Supported 00:08:52.164 SQ Associations: Not Supported 00:08:52.164 UUID List: Not Supported 00:08:52.164 Multi-Domain Subsystem: Not Supported 00:08:52.164 Fixed Capacity Management: Not Supported 00:08:52.164 Variable Capacity Management: Not Supported 00:08:52.164 Delete Endurance Group: Not Supported 00:08:52.164 Delete NVM Set: Not Supported 00:08:52.164 Extended LBA Formats Supported: Supported 00:08:52.164 Flexible Data Placement Supported: Not Supported 00:08:52.164 00:08:52.164 Controller Memory Buffer Support 00:08:52.164 ================================ 00:08:52.164 Supported: No 00:08:52.164 00:08:52.164 Persistent Memory Region Support 00:08:52.164 ================================ 00:08:52.164 Supported: No 00:08:52.164 00:08:52.164 Admin Command Set Attributes 00:08:52.164 ============================ 00:08:52.164 Security Send/Receive: Not Supported 00:08:52.164 Format NVM: Supported 00:08:52.164 Firmware Activate/Download: Not Supported 00:08:52.164 Namespace Management: Supported 00:08:52.164 Device Self-Test: Not Supported 00:08:52.164 Directives: Supported 00:08:52.164 NVMe-MI: Not Supported 00:08:52.164 Virtualization Management: Not Supported 00:08:52.164 Doorbell Buffer Config: Supported 00:08:52.164 Get LBA Status Capability: Not Supported 00:08:52.164 Command & Feature Lockdown Capability: Not Supported 00:08:52.164 Abort Command Limit: 4 00:08:52.164 Async Event Request Limit: 4 00:08:52.164 Number of Firmware Slots: N/A 00:08:52.164 Firmware Slot 1 Read-Only: N/A 00:08:52.164 Firmware Activation Without Reset: N/A 00:08:52.164 Multiple Update Detection Support: N/A 00:08:52.164 Firmware Update Granularity: No Information 
Provided 00:08:52.164 Per-Namespace SMART Log: Yes 00:08:52.164 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.164 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:52.164 Command Effects Log Page: Supported 00:08:52.165 Get Log Page Extended Data: Supported 00:08:52.165 Telemetry Log Pages: Not Supported 00:08:52.165 Persistent Event Log Pages: Not Supported 00:08:52.165 Supported Log Pages Log Page: May Support 00:08:52.165 Commands Supported & Effects Log Page: Not Supported 00:08:52.165 Feature Identifiers & Effects Log Page:May Support 00:08:52.165 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.165 Data Area 4 for Telemetry Log: Not Supported 00:08:52.165 Error Log Page Entries Supported: 1 00:08:52.165 Keep Alive: Not Supported 00:08:52.165 00:08:52.165 NVM Command Set Attributes 00:08:52.165 ========================== 00:08:52.165 Submission Queue Entry Size 00:08:52.165 Max: 64 00:08:52.165 Min: 64 00:08:52.165 Completion Queue Entry Size 00:08:52.165 Max: 16 00:08:52.165 Min: 16 00:08:52.165 Number of Namespaces: 256 00:08:52.165 Compare Command: Supported 00:08:52.165 Write Uncorrectable Command: Not Supported 00:08:52.165 Dataset Management Command: Supported 00:08:52.165 Write Zeroes Command: Supported 00:08:52.165 Set Features Save Field: Supported 00:08:52.165 Reservations: Not Supported 00:08:52.165 Timestamp: Supported 00:08:52.165 Copy: Supported 00:08:52.165 Volatile Write Cache: Present 00:08:52.165 Atomic Write Unit (Normal): 1 00:08:52.165 Atomic Write Unit (PFail): 1 00:08:52.165 Atomic Compare & Write Unit: 1 00:08:52.165 Fused Compare & Write: Not Supported 00:08:52.165 Scatter-Gather List 00:08:52.165 SGL Command Set: Supported 00:08:52.165 SGL Keyed: Not Supported 00:08:52.165 SGL Bit Bucket Descriptor: Not Supported 00:08:52.165 SGL Metadata Pointer: Not Supported 00:08:52.165 Oversized SGL: Not Supported 00:08:52.165 SGL Metadata Address: Not Supported 00:08:52.165 SGL Offset: Not Supported 00:08:52.165 Transport SGL Data Block: Not Supported 00:08:52.165 Replay Protected Memory Block: Not Supported 00:08:52.165 00:08:52.165 Firmware Slot Information 00:08:52.165 ========================= 00:08:52.165 Active slot: 1 00:08:52.165 Slot 1 Firmware Revision: 1.0 00:08:52.165 00:08:52.165 00:08:52.165 Commands Supported and Effects 00:08:52.165 ============================== 00:08:52.165 Admin Commands 00:08:52.165 -------------- 00:08:52.165 Delete I/O Submission Queue (00h): Supported 00:08:52.165 Create I/O Submission Queue (01h): Supported 00:08:52.165 Get Log Page (02h): Supported 00:08:52.165 Delete I/O Completion Queue (04h): Supported 00:08:52.165 Create I/O Completion Queue (05h): Supported 00:08:52.165 Identify (06h): Supported 00:08:52.165 Abort (08h): Supported 00:08:52.165 Set Features (09h): Supported 00:08:52.165 Get Features (0Ah): Supported 00:08:52.165 Asynchronous Event Request (0Ch): Supported 00:08:52.165 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.165 Directive Send (19h): Supported 00:08:52.165 Directive Receive (1Ah): Supported 00:08:52.165 Virtualization Management (1Ch): Supported 00:08:52.165 Doorbell Buffer Config (7Ch): Supported 00:08:52.165 Format NVM (80h): Supported LBA-Change 00:08:52.165 I/O Commands 00:08:52.165 ------------ 00:08:52.165 Flush (00h): Supported LBA-Change 00:08:52.165 Write (01h): Supported LBA-Change 00:08:52.165 Read (02h): Supported 00:08:52.165 Compare (05h): Supported 00:08:52.165 Write Zeroes (08h): Supported LBA-Change 00:08:52.165 Dataset Management (09h): 
Supported LBA-Change 00:08:52.165 Unknown (0Ch): Supported 00:08:52.165 Unknown (12h): Supported 00:08:52.165 Copy (19h): Supported LBA-Change 00:08:52.165 Unknown (1Dh): Supported LBA-Change 00:08:52.165 00:08:52.165 Error Log 00:08:52.165 ========= 00:08:52.165 00:08:52.165 Arbitration 00:08:52.165 =========== 00:08:52.165 Arbitration Burst: no limit 00:08:52.165 00:08:52.165 Power Management 00:08:52.165 ================ 00:08:52.165 Number of Power States: 1 00:08:52.165 Current Power State: Power State #0 00:08:52.165 Power State #0: 00:08:52.165 Max Power: 25.00 W 00:08:52.165 Non-Operational State: Operational 00:08:52.165 Entry Latency: 16 microseconds 00:08:52.165 Exit Latency: 4 microseconds 00:08:52.165 Relative Read Throughput: 0 00:08:52.165 Relative Read Latency: 0 00:08:52.165 Relative Write Throughput: 0 00:08:52.165 Relative Write Latency: 0 00:08:52.165 Idle Power: Not Reported 00:08:52.165 Active Power: Not Reported 00:08:52.165 Non-Operational Permissive Mode: Not Supported 00:08:52.165 00:08:52.165 Health Information 00:08:52.165 ================== 00:08:52.165 Critical Warnings: 00:08:52.165 Available Spare Space: OK 00:08:52.165 [2024-11-20 10:45:41.154522] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 63963 terminated unexpected 00:08:52.165 Temperature: OK 00:08:52.165 Device Reliability: OK 00:08:52.165 Read Only: No 00:08:52.165 Volatile Memory Backup: OK 00:08:52.165 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.165 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.165 Available Spare: 0% 00:08:52.165 Available Spare Threshold: 0% 00:08:52.165 Life Percentage Used: 0% 00:08:52.165 Data Units Read: 1171 00:08:52.165 Data Units Written: 1032 00:08:52.165 Host Read Commands: 55649 00:08:52.165 Host Write Commands: 54335 00:08:52.165 Controller Busy Time: 0 minutes 00:08:52.165 Power Cycles: 0 00:08:52.165 Power On Hours: 0 hours 00:08:52.165 Unsafe Shutdowns: 0 00:08:52.165 Unrecoverable Media Errors: 0 00:08:52.165 Lifetime Error Log Entries: 0 00:08:52.165 Warning Temperature Time: 0 minutes 00:08:52.165 Critical Temperature Time: 0 minutes 00:08:52.165 00:08:52.165 Number of Queues 00:08:52.165 ================ 00:08:52.165 Number of I/O Submission Queues: 64 00:08:52.165 Number of I/O Completion Queues: 64 00:08:52.165 00:08:52.165 ZNS Specific Controller Data 00:08:52.165 ============================ 00:08:52.165 Zone Append Size Limit: 0 00:08:52.165 00:08:52.165 00:08:52.165 Active Namespaces 00:08:52.165 ================= 00:08:52.165 Namespace ID:1 00:08:52.165 Error Recovery Timeout: Unlimited 00:08:52.165 Command Set Identifier: NVM (00h) 00:08:52.165 Deallocate: Supported 00:08:52.165 Deallocated/Unwritten Error: Supported 00:08:52.165 Deallocated Read Value: All 0x00 00:08:52.165 Deallocate in Write Zeroes: Not Supported 00:08:52.165 Deallocated Guard Field: 0xFFFF 00:08:52.165 Flush: Supported 00:08:52.165 Reservation: Not Supported 00:08:52.165 Namespace Sharing Capabilities: Private 00:08:52.165 Size (in LBAs): 1310720 (5GiB) 00:08:52.165 Capacity (in LBAs): 1310720 (5GiB) 00:08:52.165 Utilization (in LBAs): 1310720 (5GiB) 00:08:52.165 Thin Provisioning: Not Supported 00:08:52.165 Per-NS Atomic Units: No 00:08:52.165 Maximum Single Source Range Length: 128 00:08:52.165 Maximum Copy Length: 128 00:08:52.165 Maximum Source Range Count: 128 00:08:52.165 NGUID/EUI64 Never Reused: No 00:08:52.165 Namespace Write Protected: No 00:08:52.165 Number of LBA Formats: 8 00:08:52.165 Current LBA Format: LBA
Format #04 00:08:52.165 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.165 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.165 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.165 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.165 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.165 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.165 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.165 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.165 00:08:52.165 NVM Specific Namespace Data 00:08:52.165 =========================== 00:08:52.165 Logical Block Storage Tag Mask: 0 00:08:52.165 Protection Information Capabilities: 00:08:52.165 16b Guard Protection Information Storage Tag Support: No 00:08:52.165 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.165 Storage Tag Check Read Support: No 00:08:52.165 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.165 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.165 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.165 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.165 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.165 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.165 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.165 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.165 ===================================================== 00:08:52.165 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:52.165 ===================================================== 00:08:52.165 Controller Capabilities/Features 00:08:52.165 ================================ 00:08:52.165 Vendor ID: 1b36 00:08:52.165 Subsystem Vendor ID: 1af4 00:08:52.165 Serial Number: 12343 00:08:52.165 Model Number: QEMU NVMe Ctrl 00:08:52.165 Firmware Version: 8.0.0 00:08:52.166 Recommended Arb Burst: 6 00:08:52.166 IEEE OUI Identifier: 00 54 52 00:08:52.166 Multi-path I/O 00:08:52.166 May have multiple subsystem ports: No 00:08:52.166 May have multiple controllers: Yes 00:08:52.166 Associated with SR-IOV VF: No 00:08:52.166 Max Data Transfer Size: 524288 00:08:52.166 Max Number of Namespaces: 256 00:08:52.166 Max Number of I/O Queues: 64 00:08:52.166 NVMe Specification Version (VS): 1.4 00:08:52.166 NVMe Specification Version (Identify): 1.4 00:08:52.166 Maximum Queue Entries: 2048 00:08:52.166 Contiguous Queues Required: Yes 00:08:52.166 Arbitration Mechanisms Supported 00:08:52.166 Weighted Round Robin: Not Supported 00:08:52.166 Vendor Specific: Not Supported 00:08:52.166 Reset Timeout: 7500 ms 00:08:52.166 Doorbell Stride: 4 bytes 00:08:52.166 NVM Subsystem Reset: Not Supported 00:08:52.166 Command Sets Supported 00:08:52.166 NVM Command Set: Supported 00:08:52.166 Boot Partition: Not Supported 00:08:52.166 Memory Page Size Minimum: 4096 bytes 00:08:52.166 Memory Page Size Maximum: 65536 bytes 00:08:52.166 Persistent Memory Region: Not Supported 00:08:52.166 Optional Asynchronous Events Supported 00:08:52.166 Namespace Attribute Notices: Supported 00:08:52.166 Firmware Activation Notices: Not Supported 00:08:52.166 ANA Change Notices: Not Supported 00:08:52.166 PLE Aggregate Log Change 
Notices: Not Supported 00:08:52.166 LBA Status Info Alert Notices: Not Supported 00:08:52.166 EGE Aggregate Log Change Notices: Not Supported 00:08:52.166 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.166 Zone Descriptor Change Notices: Not Supported 00:08:52.166 Discovery Log Change Notices: Not Supported 00:08:52.166 Controller Attributes 00:08:52.166 128-bit Host Identifier: Not Supported 00:08:52.166 Non-Operational Permissive Mode: Not Supported 00:08:52.166 NVM Sets: Not Supported 00:08:52.166 Read Recovery Levels: Not Supported 00:08:52.166 Endurance Groups: Supported 00:08:52.166 Predictable Latency Mode: Not Supported 00:08:52.166 Traffic Based Keep ALive: Not Supported 00:08:52.166 Namespace Granularity: Not Supported 00:08:52.166 SQ Associations: Not Supported 00:08:52.166 UUID List: Not Supported 00:08:52.166 Multi-Domain Subsystem: Not Supported 00:08:52.166 Fixed Capacity Management: Not Supported 00:08:52.166 Variable Capacity Management: Not Supported 00:08:52.166 Delete Endurance Group: Not Supported 00:08:52.166 Delete NVM Set: Not Supported 00:08:52.166 Extended LBA Formats Supported: Supported 00:08:52.166 Flexible Data Placement Supported: Supported 00:08:52.166 00:08:52.166 Controller Memory Buffer Support 00:08:52.166 ================================ 00:08:52.166 Supported: No 00:08:52.166 00:08:52.166 Persistent Memory Region Support 00:08:52.166 ================================ 00:08:52.166 Supported: No 00:08:52.166 00:08:52.166 Admin Command Set Attributes 00:08:52.166 ============================ 00:08:52.166 Security Send/Receive: Not Supported 00:08:52.166 Format NVM: Supported 00:08:52.166 Firmware Activate/Download: Not Supported 00:08:52.166 Namespace Management: Supported 00:08:52.166 Device Self-Test: Not Supported 00:08:52.166 Directives: Supported 00:08:52.166 NVMe-MI: Not Supported 00:08:52.166 Virtualization Management: Not Supported 00:08:52.166 Doorbell Buffer Config: Supported 00:08:52.166 Get LBA Status Capability: Not Supported 00:08:52.166 Command & Feature Lockdown Capability: Not Supported 00:08:52.166 Abort Command Limit: 4 00:08:52.166 Async Event Request Limit: 4 00:08:52.166 Number of Firmware Slots: N/A 00:08:52.166 Firmware Slot 1 Read-Only: N/A 00:08:52.166 Firmware Activation Without Reset: N/A 00:08:52.166 Multiple Update Detection Support: N/A 00:08:52.166 Firmware Update Granularity: No Information Provided 00:08:52.166 Per-Namespace SMART Log: Yes 00:08:52.166 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.166 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:52.166 Command Effects Log Page: Supported 00:08:52.166 Get Log Page Extended Data: Supported 00:08:52.166 Telemetry Log Pages: Not Supported 00:08:52.166 Persistent Event Log Pages: Not Supported 00:08:52.166 Supported Log Pages Log Page: May Support 00:08:52.166 Commands Supported & Effects Log Page: Not Supported 00:08:52.166 Feature Identifiers & Effects Log Page:May Support 00:08:52.166 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.166 Data Area 4 for Telemetry Log: Not Supported 00:08:52.166 Error Log Page Entries Supported: 1 00:08:52.166 Keep Alive: Not Supported 00:08:52.166 00:08:52.166 NVM Command Set Attributes 00:08:52.166 ========================== 00:08:52.166 Submission Queue Entry Size 00:08:52.166 Max: 64 00:08:52.166 Min: 64 00:08:52.166 Completion Queue Entry Size 00:08:52.166 Max: 16 00:08:52.166 Min: 16 00:08:52.166 Number of Namespaces: 256 00:08:52.166 Compare Command: Supported 00:08:52.166 Write 
Uncorrectable Command: Not Supported 00:08:52.166 Dataset Management Command: Supported 00:08:52.166 Write Zeroes Command: Supported 00:08:52.166 Set Features Save Field: Supported 00:08:52.166 Reservations: Not Supported 00:08:52.166 Timestamp: Supported 00:08:52.166 Copy: Supported 00:08:52.166 Volatile Write Cache: Present 00:08:52.166 Atomic Write Unit (Normal): 1 00:08:52.166 Atomic Write Unit (PFail): 1 00:08:52.166 Atomic Compare & Write Unit: 1 00:08:52.166 Fused Compare & Write: Not Supported 00:08:52.166 Scatter-Gather List 00:08:52.166 SGL Command Set: Supported 00:08:52.166 SGL Keyed: Not Supported 00:08:52.166 SGL Bit Bucket Descriptor: Not Supported 00:08:52.166 SGL Metadata Pointer: Not Supported 00:08:52.166 Oversized SGL: Not Supported 00:08:52.166 SGL Metadata Address: Not Supported 00:08:52.166 SGL Offset: Not Supported 00:08:52.166 Transport SGL Data Block: Not Supported 00:08:52.166 Replay Protected Memory Block: Not Supported 00:08:52.166 00:08:52.166 Firmware Slot Information 00:08:52.166 ========================= 00:08:52.166 Active slot: 1 00:08:52.166 Slot 1 Firmware Revision: 1.0 00:08:52.166 00:08:52.166 00:08:52.166 Commands Supported and Effects 00:08:52.166 ============================== 00:08:52.166 Admin Commands 00:08:52.166 -------------- 00:08:52.166 Delete I/O Submission Queue (00h): Supported 00:08:52.166 Create I/O Submission Queue (01h): Supported 00:08:52.166 Get Log Page (02h): Supported 00:08:52.166 Delete I/O Completion Queue (04h): Supported 00:08:52.166 Create I/O Completion Queue (05h): Supported 00:08:52.166 Identify (06h): Supported 00:08:52.166 Abort (08h): Supported 00:08:52.166 Set Features (09h): Supported 00:08:52.166 Get Features (0Ah): Supported 00:08:52.166 Asynchronous Event Request (0Ch): Supported 00:08:52.166 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.166 Directive Send (19h): Supported 00:08:52.166 Directive Receive (1Ah): Supported 00:08:52.166 Virtualization Management (1Ch): Supported 00:08:52.166 Doorbell Buffer Config (7Ch): Supported 00:08:52.166 Format NVM (80h): Supported LBA-Change 00:08:52.166 I/O Commands 00:08:52.166 ------------ 00:08:52.166 Flush (00h): Supported LBA-Change 00:08:52.166 Write (01h): Supported LBA-Change 00:08:52.166 Read (02h): Supported 00:08:52.166 Compare (05h): Supported 00:08:52.166 Write Zeroes (08h): Supported LBA-Change 00:08:52.166 Dataset Management (09h): Supported LBA-Change 00:08:52.166 Unknown (0Ch): Supported 00:08:52.166 Unknown (12h): Supported 00:08:52.166 Copy (19h): Supported LBA-Change 00:08:52.166 Unknown (1Dh): Supported LBA-Change 00:08:52.166 00:08:52.166 Error Log 00:08:52.166 ========= 00:08:52.166 00:08:52.166 Arbitration 00:08:52.166 =========== 00:08:52.166 Arbitration Burst: no limit 00:08:52.166 00:08:52.166 Power Management 00:08:52.166 ================ 00:08:52.166 Number of Power States: 1 00:08:52.166 Current Power State: Power State #0 00:08:52.166 Power State #0: 00:08:52.166 Max Power: 25.00 W 00:08:52.166 Non-Operational State: Operational 00:08:52.166 Entry Latency: 16 microseconds 00:08:52.166 Exit Latency: 4 microseconds 00:08:52.166 Relative Read Throughput: 0 00:08:52.166 Relative Read Latency: 0 00:08:52.166 Relative Write Throughput: 0 00:08:52.166 Relative Write Latency: 0 00:08:52.166 Idle Power: Not Reported 00:08:52.166 Active Power: Not Reported 00:08:52.166 Non-Operational Permissive Mode: Not Supported 00:08:52.166 00:08:52.166 Health Information 00:08:52.166 ================== 00:08:52.166 Critical Warnings: 00:08:52.166 
Available Spare Space: OK 00:08:52.166 Temperature: OK 00:08:52.166 Device Reliability: OK 00:08:52.166 Read Only: No 00:08:52.166 Volatile Memory Backup: OK 00:08:52.166 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.167 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.167 Available Spare: 0% 00:08:52.167 Available Spare Threshold: 0% 00:08:52.167 Life Percentage Used: 0% 00:08:52.167 Data Units Read: 879 00:08:52.167 Data Units Written: 808 00:08:52.167 Host Read Commands: 39045 00:08:52.167 Host Write Commands: 38468 00:08:52.167 Controller Busy Time: 0 minutes 00:08:52.167 Power Cycles: 0 00:08:52.167 Power On Hours: 0 hours 00:08:52.167 Unsafe Shutdowns: 0 00:08:52.167 Unrecoverable Media Errors: 0 00:08:52.167 Lifetime Error Log Entries: 0 00:08:52.167 Warning Temperature Time: 0 minutes 00:08:52.167 Critical Temperature Time: 0 minutes 00:08:52.167 00:08:52.167 Number of Queues 00:08:52.167 ================ 00:08:52.167 Number of I/O Submission Queues: 64 00:08:52.167 Number of I/O Completion Queues: 64 00:08:52.167 00:08:52.167 ZNS Specific Controller Data 00:08:52.167 ============================ 00:08:52.167 Zone Append Size Limit: 0 00:08:52.167 00:08:52.167 00:08:52.167 Active Namespaces 00:08:52.167 ================= 00:08:52.167 Namespace ID:1 00:08:52.167 Error Recovery Timeout: Unlimited 00:08:52.167 Command Set Identifier: NVM (00h) 00:08:52.167 Deallocate: Supported 00:08:52.167 Deallocated/Unwritten Error: Supported 00:08:52.167 Deallocated Read Value: All 0x00 00:08:52.167 Deallocate in Write Zeroes: Not Supported 00:08:52.167 Deallocated Guard Field: 0xFFFF 00:08:52.167 Flush: Supported 00:08:52.167 Reservation: Not Supported 00:08:52.167 Namespace Sharing Capabilities: Multiple Controllers 00:08:52.167 Size (in LBAs): 262144 (1GiB) 00:08:52.167 Capacity (in LBAs): 262144 (1GiB) 00:08:52.167 Utilization (in LBAs): 262144 (1GiB) 00:08:52.167 Thin Provisioning: Not Supported 00:08:52.167 Per-NS Atomic Units: No 00:08:52.167 Maximum Single Source Range Length: 128 00:08:52.167 Maximum Copy Length: 128 00:08:52.167 Maximum Source Range Count: 128 00:08:52.167 NGUID/EUI64 Never Reused: No 00:08:52.167 Namespace Write Protected: No 00:08:52.167 Endurance group ID: 1 00:08:52.167 Number of LBA Formats: 8 00:08:52.167 Current LBA Format: LBA Format #04 00:08:52.167 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.167 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.167 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.167 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.167 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.167 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.167 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.167 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.167 00:08:52.167 Get Feature FDP: 00:08:52.167 ================ 00:08:52.167 Enabled: Yes 00:08:52.167 FDP configuration index: 0 00:08:52.167 00:08:52.167 FDP configurations log page 00:08:52.167 =========================== 00:08:52.167 Number of FDP configurations: 1 00:08:52.167 Version: 0 00:08:52.167 Size: 112 00:08:52.167 FDP Configuration Descriptor: 0 00:08:52.167 Descriptor Size: 96 00:08:52.167 Reclaim Group Identifier format: 2 00:08:52.167 FDP Volatile Write Cache: Not Present 00:08:52.167 FDP Configuration: Valid 00:08:52.167 Vendor Specific Size: 0 00:08:52.167 Number of Reclaim Groups: 2 00:08:52.167 Number of Reclaim Unit Handles: 8 00:08:52.167 Max Placement Identifiers: 128 00:08:52.167 Number of
Namespaces Supported: 256 00:08:52.167 Reclaim Unit Nominal Size: 6000000 bytes 00:08:52.167 Estimated Reclaim Unit Time Limit: Not Reported 00:08:52.167 RUH Desc #000: RUH Type: Initially Isolated 00:08:52.167 RUH Desc #001: RUH Type: Initially Isolated 00:08:52.167 RUH Desc #002: RUH Type: Initially Isolated 00:08:52.167 RUH Desc #003: RUH Type: Initially Isolated 00:08:52.167 RUH Desc #004: RUH Type: Initially Isolated 00:08:52.167 RUH Desc #005: RUH Type: Initially Isolated 00:08:52.167 RUH Desc #006: RUH Type: Initially Isolated 00:08:52.167 RUH Desc #007: RUH Type: Initially Isolated 00:08:52.167 00:08:52.167 FDP reclaim unit handle usage log page 00:08:52.167 ====================================== 00:08:52.167 Number of Reclaim Unit Handles: 8 00:08:52.167 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:52.167 RUH Usage Desc #001: RUH Attributes: Unused 00:08:52.167 RUH Usage Desc #002: RUH Attributes: Unused 00:08:52.167 RUH Usage Desc #003: RUH Attributes: Unused 00:08:52.167 RUH Usage Desc #004: RUH Attributes: Unused 00:08:52.167 RUH Usage Desc #005: RUH Attributes: Unused 00:08:52.167 RUH Usage Desc #006: RUH Attributes: Unused 00:08:52.167 RUH Usage Desc #007: RUH Attributes: Unused 00:08:52.167 00:08:52.167 FDP statistics log page 00:08:52.167 ======================= 00:08:52.167 Host bytes with metadata written: 525180928 00:08:52.167 Media bytes with metadata written: 525238272 00:08:52.167 Media bytes erased: 0 00:08:52.167 00:08:52.167 FDP events log page 00:08:52.167 =================== 00:08:52.167 Number of FDP events: 0 00:08:52.167 00:08:52.167 NVM Specific Namespace Data 00:08:52.167 =========================== 00:08:52.167 Logical Block Storage Tag Mask: 0 00:08:52.167 Protection Information Capabilities: 00:08:52.167 16b Guard Protection Information Storage Tag Support: No 00:08:52.167 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.167 Storage Tag Check Read Support: No 00:08:52.167 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.167 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.167 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.167 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.167 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.167 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.167 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.167 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.167 ===================================================== 00:08:52.167 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:52.167 ===================================================== 00:08:52.167 Controller Capabilities/Features 00:08:52.167 ================================ 00:08:52.167 Vendor ID: 1b36 00:08:52.167 Subsystem Vendor ID: 1af4 00:08:52.167 Serial Number: 12342 00:08:52.167 Model Number: QEMU NVMe Ctrl 00:08:52.167 Firmware Version: 8.0.0 00:08:52.167 Recommended Arb Burst: 6 00:08:52.167 IEEE OUI Identifier: 00 54 52 00:08:52.167 Multi-path I/O 00:08:52.167 May have multiple subsystem ports: No 00:08:52.167 May have multiple controllers: No 00:08:52.167 Associated with SR-IOV VF: No 00:08:52.167 Max Data
Transfer Size: 524288 00:08:52.167 Max Number of Namespaces: 256 00:08:52.167 Max Number of I/O Queues: 64 00:08:52.167 NVMe Specification Version (VS): 1.4 00:08:52.167 NVMe Specification Version (Identify): 1.4 00:08:52.167 Maximum Queue Entries: 2048 00:08:52.167 Contiguous Queues Required: Yes 00:08:52.167 Arbitration Mechanisms Supported 00:08:52.167 Weighted Round Robin: Not Supported 00:08:52.167 Vendor Specific: Not Supported 00:08:52.167 Reset Timeout: 7500 ms 00:08:52.167 Doorbell Stride: 4 bytes 00:08:52.167 NVM Subsystem Reset: Not Supported 00:08:52.167 Command Sets Supported 00:08:52.167 NVM Command Set: Supported 00:08:52.167 Boot Partition: Not Supported 00:08:52.167 Memory Page Size Minimum: 4096 bytes 00:08:52.167 Memory Page Size Maximum: 65536 bytes 00:08:52.167 Persistent Memory Region: Not Supported 00:08:52.168 Optional Asynchronous Events Supported 00:08:52.168 Namespace Attribute Notices: Supported 00:08:52.168 Firmware Activation Notices: Not Supported 00:08:52.168 ANA Change Notices: Not Supported 00:08:52.168 PLE Aggregate Log Change Notices: Not Supported 00:08:52.168 LBA Status Info Alert Notices: Not Supported 00:08:52.168 EGE Aggregate Log Change Notices: Not Supported 00:08:52.168 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.168 Zone Descriptor Change Notices: Not Supported 00:08:52.168 Discovery Log Change Notices: Not Supported 00:08:52.168 Controller Attributes 00:08:52.168 128-bit Host Identifier: Not Supported 00:08:52.168 Non-Operational Permissive Mode: Not Supported 00:08:52.168 NVM Sets: Not Supported 00:08:52.168 Read Recovery Levels: Not Supported 00:08:52.168 Endurance Groups: Not Supported 00:08:52.168 Predictable Latency Mode: Not Supported 00:08:52.168 Traffic Based Keep ALive: Not Supported 00:08:52.168 Namespace Granularity: Not Supported 00:08:52.168 SQ Associations: Not Supported 00:08:52.168 UUID List: Not Supported 00:08:52.168 Multi-Domain Subsystem: Not Supported 00:08:52.168 Fixed Capacity Management: Not Supported 00:08:52.168 Variable Capacity Management: Not Supported 00:08:52.168 Delete Endurance Group: Not Supported 00:08:52.168 Delete NVM Set: Not Supported 00:08:52.168 Extended LBA Formats Supported: Supported 00:08:52.168 Flexible Data Placement Supported: Not Supported 00:08:52.168 00:08:52.168 Controller Memory Buffer Support 00:08:52.168 ================================ 00:08:52.168 Supported: No 00:08:52.168 00:08:52.168 Persistent Memory Region Support 00:08:52.168 ================================ 00:08:52.168 Supported: No 00:08:52.168 00:08:52.168 Admin Command Set Attributes 00:08:52.168 ============================ 00:08:52.168 Security Send/Receive: Not Supported 00:08:52.168 Format NVM: Supported 00:08:52.168 Firmware Activate/Download: Not Supported 00:08:52.168 Namespace Management: Supported 00:08:52.168 Device Self-Test: Not Supported 00:08:52.168 Directives: Supported 00:08:52.168 NVMe-MI: Not Supported 00:08:52.168 Virtualization Management: Not Supported 00:08:52.168 Doorbell Buffer Config: Supported 00:08:52.168 Get LBA Status Capability: Not Supported 00:08:52.168 Command & Feature Lockdown Capability: Not Supported 00:08:52.168 Abort Command Limit: 4 00:08:52.168 Async Event Request Limit: 4 00:08:52.168 Number of Firmware Slots: N/A 00:08:52.168 Firmware Slot 1 Read-Only: N/A 00:08:52.168 Firmware Activation Without Reset: N/A 00:08:52.168 Multiple Update Detection Support: N/A 00:08:52.168 Firmware Update Granularity: No Information Provided 00:08:52.168 Per-Namespace SMART Log: Yes 
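
Note: the FDP statistics log page printed earlier for the 12343 subsystem ("Host bytes with metadata written" versus "Media bytes with metadata written") gives a rough write-amplification estimate, and the 1GiB namespace size can be cross-checked against its active 4096-byte LBA format. A minimal bash sketch, assuming the common reading of write amplification as media bytes written divided by host bytes written (all values copied from the output above):

  host_bytes=525180928    # "Host bytes with metadata written" (12343 FDP stats above)
  media_bytes=525238272   # "Media bytes with metadata written"
  awk -v h="$host_bytes" -v m="$media_bytes" 'BEGIN { printf "approx WAF: %.4f\n", m / h }'   # ~1.0001
  echo "ns size: $(( 262144 * 4096 )) bytes"   # 262144 LBAs * 4096 B = 1073741824, exactly 1 GiB
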
00:08:52.168 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.168 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:52.168 Command Effects Log Page: Supported 00:08:52.168 Get Log Page Extended Data: Supported 00:08:52.168 Telemetry Log Pages: Not Supported 00:08:52.168 Persistent Event Log Pages: Not Supported 00:08:52.168 Supported Log Pages Log Page: May Support 00:08:52.168 Commands Supported & Effects Log Page: Not Supported 00:08:52.168 Feature Identifiers & Effects Log Page:May Support 00:08:52.168 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.168 Data Area 4 for Telemetry Log: Not Supported 00:08:52.168 Error Log Page Entries Supported: 1 00:08:52.168 Keep Alive: Not Supported 00:08:52.168 00:08:52.168 NVM Command Set Attributes 00:08:52.168 ========================== 00:08:52.168 Submission Queue Entry Size 00:08:52.168 Max: 64 00:08:52.168 Min: 64 00:08:52.168 Completion Queue Entry Size 00:08:52.168 Max: 16 00:08:52.168 Min: 16 00:08:52.168 Number of Namespaces: 256 00:08:52.168 Compare Command: Supported 00:08:52.168 Write Uncorrectable Command: Not Supported 00:08:52.168 Dataset Management Command: Supported 00:08:52.168 Write Zeroes Command: Supported 00:08:52.168 Set Features Save Field: Supported 00:08:52.168 Reservations: Not Supported 00:08:52.168 Timestamp: Supported 00:08:52.168 Copy: Supported 00:08:52.168 Volatile Write Cache: Present 00:08:52.168 Atomic Write Unit (Normal): 1 00:08:52.168 Atomic Write Unit (PFail): 1 00:08:52.168 Atomic Compare & Write Unit: 1 00:08:52.168 Fused Compare & Write: Not Supported 00:08:52.168 Scatter-Gather List 00:08:52.168 SGL Command Set: Supported 00:08:52.168 SGL Keyed: Not Supported 00:08:52.168 SGL Bit Bucket Descriptor: Not Supported 00:08:52.168 SGL Metadata Pointer: Not Supported 00:08:52.168 Oversized SGL: Not Supported 00:08:52.168 SGL Metadata Address: Not Supported 00:08:52.168 SGL Offset: Not Supported 00:08:52.168 Transport SGL Data Block: Not Supported 00:08:52.168 Replay Protected Memory Block: Not Supported 00:08:52.168 00:08:52.168 Firmware Slot Information 00:08:52.168 ========================= 00:08:52.168 Active slot: 1 00:08:52.168 Slot 1 Firmware Revision: 1.0 00:08:52.168 00:08:52.168 00:08:52.168 Commands Supported and Effects 00:08:52.168 ============================== 00:08:52.168 Admin Commands 00:08:52.168 -------------- 00:08:52.168 Delete I/O Submission Queue (00h): Supported 00:08:52.168 Create I/O Submission Queue (01h): Supported 00:08:52.168 Get Log Page (02h): Supported 00:08:52.168 Delete I/O Completion Queue (04h): Supported 00:08:52.168 Create I/O Completion Queue (05h): Supported 00:08:52.168 Identify (06h): Supported 00:08:52.168 Abort (08h): Supported 00:08:52.168 Set Features (09h): Supported 00:08:52.168 Get Features (0Ah): Supported 00:08:52.168 Asynchronous Event Request (0Ch): Supported 00:08:52.168 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.168 Directive Send (19h): Supported 00:08:52.168 Directive Receive (1Ah): Supported 00:08:52.168 Virtualization Management (1Ch): Supported 00:08:52.168 Doorbell Buffer Config (7Ch): Supported 00:08:52.168 Format NVM (80h): Supported LBA-Change 00:08:52.168 I/O Commands 00:08:52.168 ------------ 00:08:52.168 Flush (00h): Supported LBA-Change 00:08:52.168 Write (01h): Supported LBA-Change 00:08:52.168 Read (02h): Supported 00:08:52.168 Compare (05h): Supported 00:08:52.168 Write Zeroes (08h): Supported LBA-Change 00:08:52.168 Dataset Management (09h): Supported LBA-Change 00:08:52.168 Unknown (0Ch): 
Supported 00:08:52.168 Unknown (12h): Supported 00:08:52.168 Copy (19h): Supported LBA-Change 00:08:52.168 Unknown (1Dh): Supported LBA-Change 00:08:52.168 00:08:52.168 Error Log 00:08:52.168 ========= 00:08:52.168 00:08:52.168 Arbitration 00:08:52.168 =========== 00:08:52.168 Arbitration Burst: no limit 00:08:52.168 00:08:52.168 Power Management 00:08:52.168 ================ 00:08:52.168 Number of Power States: 1 00:08:52.168 Current Power State: Power State #0 00:08:52.168 Power State #0: 00:08:52.168 Max Power: 25.00 W 00:08:52.168 Non-Operational State: Operational 00:08:52.168 Entry Latency: 16 microseconds 00:08:52.168 Exit Latency: 4 microseconds 00:08:52.168 Relative Read Throughput: 0 00:08:52.168 Relative Read Latency: 0 00:08:52.168 Relative Write Throughput: 0 00:08:52.168 Relative Write Latency: 0 00:08:52.168 Idle Power: Not Reported 00:08:52.168 Active Power: Not Reported 00:08:52.168 Non-Operational Permissive Mode: Not Supported 00:08:52.168 00:08:52.168 Health Information 00:08:52.168 ================== 00:08:52.168 Critical Warnings: 00:08:52.168 Available Spare Space: OK 00:08:52.168 Temperature: OK 00:08:52.168 Device Reliability: OK 00:08:52.168 Read Only: No 00:08:52.168 Volatile Memory Backup: OK 00:08:52.168 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.168 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.168 Available Spare: 0% 00:08:52.168 Available Spare Threshold: 0% 00:08:52.168 Life Percentage Used: 0% 00:08:52.168 Data Units Read: 2422 00:08:52.168 Data Units Written: 2209 00:08:52.168 Host Read Commands: 115461 00:08:52.168 Host Write Commands: 113730 00:08:52.168 Controller Busy Time: 0 minutes 00:08:52.168 Power Cycles: 0 00:08:52.168 Power On Hours: 0 hours 00:08:52.168 Unsafe Shutdowns: 0 00:08:52.168 Unrecoverable Media Errors: 0 00:08:52.168 Lifetime Error Log Entries: 0 00:08:52.168 Warning Temperature Time: 0 minutes 00:08:52.168 Critical Temperature Time: 0 minutes 00:08:52.168 00:08:52.168 Number of Queues 00:08:52.168 ================ 00:08:52.168 Number of I/O Submission Queues: 64 00:08:52.168 Number of I/O Completion Queues: 64 00:08:52.168 00:08:52.168 ZNS Specific Controller Data 00:08:52.168 ============================ 00:08:52.168 Zone Append Size Limit: 0 00:08:52.168 00:08:52.168 00:08:52.168 Active Namespaces 00:08:52.168 ================= 00:08:52.168 Namespace ID:1 00:08:52.168 Error Recovery Timeout: Unlimited 00:08:52.169 Command Set Identifier: NVM (00h) 00:08:52.169 Deallocate: Supported 00:08:52.169 Deallocated/Unwritten Error: Supported 00:08:52.169 Deallocated Read Value: All 0x00 00:08:52.169 Deallocate in Write Zeroes: Not Supported 00:08:52.169 Deallocated Guard Field: 0xFFFF 00:08:52.169 Flush: Supported 00:08:52.169 Reservation: Not Supported 00:08:52.169 Namespace Sharing Capabilities: Private 00:08:52.169 Size (in LBAs): 1048576 (4GiB) 00:08:52.169 Capacity (in LBAs): 1048576 (4GiB) 00:08:52.169 Utilization (in LBAs): 1048576 (4GiB) 00:08:52.169 Thin Provisioning: Not Supported 00:08:52.169 Per-NS Atomic Units: No 00:08:52.169 Maximum Single Source Range Length: 128 00:08:52.169 Maximum Copy Length: 128 00:08:52.169 Maximum Source Range Count: 128 00:08:52.169 NGUID/EUI64 Never Reused: No 00:08:52.169 Namespace Write Protected: No 00:08:52.169 Number of LBA Formats: 8 00:08:52.169 Current LBA Format: LBA Format #04 00:08:52.169 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.169 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.169 LBA Format #02: Data Size: 512 Metadata Size: 16 
00:08:52.169 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.169 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.169 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.169 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.169 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.169 00:08:52.169 NVM Specific Namespace Data 00:08:52.169 =========================== 00:08:52.169 Logical Block Storage Tag Mask: 0 00:08:52.169 Protection Information Capabilities: 00:08:52.169 16b Guard Protection Information Storage Tag Support: No 00:08:52.169 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.169 Storage Tag Check Read Support: No 00:08:52.169 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Namespace ID:2 00:08:52.169 Error Recovery Timeout: Unlimited 00:08:52.169 Command Set Identifier: NVM (00h) 00:08:52.169 Deallocate: Supported 00:08:52.169 Deallocated/Unwritten Error: Supported 00:08:52.169 Deallocated Read Value: All 0x00 00:08:52.169 Deallocate in Write Zeroes: Not Supported 00:08:52.169 Deallocated Guard Field: 0xFFFF 00:08:52.169 Flush: Supported 00:08:52.169 Reservation: Not Supported 00:08:52.169 Namespace Sharing Capabilities: Private 00:08:52.169 Size (in LBAs): 1048576 (4GiB) 00:08:52.169 Capacity (in LBAs): 1048576 (4GiB) 00:08:52.169 Utilization (in LBAs): 1048576 (4GiB) 00:08:52.169 Thin Provisioning: Not Supported 00:08:52.169 Per-NS Atomic Units: No 00:08:52.169 Maximum Single Source Range Length: 128 00:08:52.169 Maximum Copy Length: 128 00:08:52.169 Maximum Source Range Count: 128 00:08:52.169 NGUID/EUI64 Never Reused: No 00:08:52.169 Namespace Write Protected: No 00:08:52.169 Number of LBA Formats: 8 00:08:52.169 Current LBA Format: LBA Format #04 00:08:52.169 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.169 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.169 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.169 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.169 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.169 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.169 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.169 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.169 00:08:52.169 NVM Specific Namespace Data 00:08:52.169 =========================== 00:08:52.169 Logical Block Storage Tag Mask: 0 00:08:52.169 Protection Information Capabilities: 00:08:52.169 16b Guard Protection Information Storage Tag Support: No 00:08:52.169 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.169 Storage Tag Check Read Support: No 00:08:52.169 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 
00:08:52.169 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Namespace ID:3 00:08:52.169 Error Recovery Timeout: Unlimited 00:08:52.169 Command Set Identifier: NVM (00h) 00:08:52.169 Deallocate: Supported 00:08:52.169 Deallocated/Unwritten Error: Supported 00:08:52.169 Deallocated Read Value: All 0x00 00:08:52.169 Deallocate in Write Zeroes: Not Supported 00:08:52.169 Deallocated Guard Field: 0xFFFF 00:08:52.169 Flush: Supported 00:08:52.169 Reservation: Not Supported 00:08:52.169 Namespace Sharing Capabilities: Private 00:08:52.169 Size (in LBAs): 1048576 (4GiB) 00:08:52.169 Capacity (in LBAs):[2024-11-20 10:45:41.156046] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 63963 terminated unexpected 00:08:52.169 1048576 (4GiB) 00:08:52.169 Utilization (in LBAs): 1048576 (4GiB) 00:08:52.169 Thin Provisioning: Not Supported 00:08:52.169 Per-NS Atomic Units: No 00:08:52.169 Maximum Single Source Range Length: 128 00:08:52.169 Maximum Copy Length: 128 00:08:52.169 Maximum Source Range Count: 128 00:08:52.169 NGUID/EUI64 Never Reused: No 00:08:52.169 Namespace Write Protected: No 00:08:52.169 Number of LBA Formats: 8 00:08:52.169 Current LBA Format: LBA Format #04 00:08:52.169 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.169 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.169 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.169 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.169 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.169 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.169 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.169 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.169 00:08:52.169 NVM Specific Namespace Data 00:08:52.169 =========================== 00:08:52.169 Logical Block Storage Tag Mask: 0 00:08:52.169 Protection Information Capabilities: 00:08:52.169 16b Guard Protection Information Storage Tag Support: No 00:08:52.169 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.169 Storage Tag Check Read Support: No 00:08:52.169 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 Extended LBA Format 
#07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.169 10:45:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:52.169 10:45:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:52.430 ===================================================== 00:08:52.430 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:52.430 ===================================================== 00:08:52.430 Controller Capabilities/Features 00:08:52.430 ================================ 00:08:52.430 Vendor ID: 1b36 00:08:52.430 Subsystem Vendor ID: 1af4 00:08:52.430 Serial Number: 12340 00:08:52.430 Model Number: QEMU NVMe Ctrl 00:08:52.430 Firmware Version: 8.0.0 00:08:52.430 Recommended Arb Burst: 6 00:08:52.430 IEEE OUI Identifier: 00 54 52 00:08:52.430 Multi-path I/O 00:08:52.430 May have multiple subsystem ports: No 00:08:52.430 May have multiple controllers: No 00:08:52.430 Associated with SR-IOV VF: No 00:08:52.430 Max Data Transfer Size: 524288 00:08:52.430 Max Number of Namespaces: 256 00:08:52.430 Max Number of I/O Queues: 64 00:08:52.430 NVMe Specification Version (VS): 1.4 00:08:52.430 NVMe Specification Version (Identify): 1.4 00:08:52.430 Maximum Queue Entries: 2048 00:08:52.430 Contiguous Queues Required: Yes 00:08:52.430 Arbitration Mechanisms Supported 00:08:52.430 Weighted Round Robin: Not Supported 00:08:52.430 Vendor Specific: Not Supported 00:08:52.430 Reset Timeout: 7500 ms 00:08:52.430 Doorbell Stride: 4 bytes 00:08:52.430 NVM Subsystem Reset: Not Supported 00:08:52.430 Command Sets Supported 00:08:52.430 NVM Command Set: Supported 00:08:52.430 Boot Partition: Not Supported 00:08:52.430 Memory Page Size Minimum: 4096 bytes 00:08:52.430 Memory Page Size Maximum: 65536 bytes 00:08:52.430 Persistent Memory Region: Not Supported 00:08:52.430 Optional Asynchronous Events Supported 00:08:52.430 Namespace Attribute Notices: Supported 00:08:52.430 Firmware Activation Notices: Not Supported 00:08:52.430 ANA Change Notices: Not Supported 00:08:52.430 PLE Aggregate Log Change Notices: Not Supported 00:08:52.430 LBA Status Info Alert Notices: Not Supported 00:08:52.430 EGE Aggregate Log Change Notices: Not Supported 00:08:52.430 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.430 Zone Descriptor Change Notices: Not Supported 00:08:52.430 Discovery Log Change Notices: Not Supported 00:08:52.430 Controller Attributes 00:08:52.430 128-bit Host Identifier: Not Supported 00:08:52.430 Non-Operational Permissive Mode: Not Supported 00:08:52.430 NVM Sets: Not Supported 00:08:52.430 Read Recovery Levels: Not Supported 00:08:52.430 Endurance Groups: Not Supported 00:08:52.430 Predictable Latency Mode: Not Supported 00:08:52.430 Traffic Based Keep ALive: Not Supported 00:08:52.430 Namespace Granularity: Not Supported 00:08:52.430 SQ Associations: Not Supported 00:08:52.430 UUID List: Not Supported 00:08:52.430 Multi-Domain Subsystem: Not Supported 00:08:52.430 Fixed Capacity Management: Not Supported 00:08:52.430 Variable Capacity Management: Not Supported 00:08:52.430 Delete Endurance Group: Not Supported 00:08:52.430 Delete NVM Set: Not Supported 00:08:52.430 Extended LBA Formats Supported: Supported 00:08:52.430 Flexible Data Placement Supported: Not Supported 00:08:52.430 00:08:52.430 Controller Memory Buffer Support 00:08:52.430 ================================ 00:08:52.431 Supported: No 00:08:52.431 00:08:52.431 Persistent Memory Region Support 00:08:52.431 
================================ 00:08:52.431 Supported: No 00:08:52.431 00:08:52.431 Admin Command Set Attributes 00:08:52.431 ============================ 00:08:52.431 Security Send/Receive: Not Supported 00:08:52.431 Format NVM: Supported 00:08:52.431 Firmware Activate/Download: Not Supported 00:08:52.431 Namespace Management: Supported 00:08:52.431 Device Self-Test: Not Supported 00:08:52.431 Directives: Supported 00:08:52.431 NVMe-MI: Not Supported 00:08:52.431 Virtualization Management: Not Supported 00:08:52.431 Doorbell Buffer Config: Supported 00:08:52.431 Get LBA Status Capability: Not Supported 00:08:52.431 Command & Feature Lockdown Capability: Not Supported 00:08:52.431 Abort Command Limit: 4 00:08:52.431 Async Event Request Limit: 4 00:08:52.431 Number of Firmware Slots: N/A 00:08:52.431 Firmware Slot 1 Read-Only: N/A 00:08:52.431 Firmware Activation Without Reset: N/A 00:08:52.431 Multiple Update Detection Support: N/A 00:08:52.431 Firmware Update Granularity: No Information Provided 00:08:52.431 Per-Namespace SMART Log: Yes 00:08:52.431 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.431 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:52.431 Command Effects Log Page: Supported 00:08:52.431 Get Log Page Extended Data: Supported 00:08:52.431 Telemetry Log Pages: Not Supported 00:08:52.431 Persistent Event Log Pages: Not Supported 00:08:52.431 Supported Log Pages Log Page: May Support 00:08:52.431 Commands Supported & Effects Log Page: Not Supported 00:08:52.431 Feature Identifiers & Effects Log Page:May Support 00:08:52.431 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.431 Data Area 4 for Telemetry Log: Not Supported 00:08:52.431 Error Log Page Entries Supported: 1 00:08:52.431 Keep Alive: Not Supported 00:08:52.431 00:08:52.431 NVM Command Set Attributes 00:08:52.431 ========================== 00:08:52.431 Submission Queue Entry Size 00:08:52.431 Max: 64 00:08:52.431 Min: 64 00:08:52.431 Completion Queue Entry Size 00:08:52.431 Max: 16 00:08:52.431 Min: 16 00:08:52.431 Number of Namespaces: 256 00:08:52.431 Compare Command: Supported 00:08:52.431 Write Uncorrectable Command: Not Supported 00:08:52.431 Dataset Management Command: Supported 00:08:52.431 Write Zeroes Command: Supported 00:08:52.431 Set Features Save Field: Supported 00:08:52.431 Reservations: Not Supported 00:08:52.431 Timestamp: Supported 00:08:52.431 Copy: Supported 00:08:52.431 Volatile Write Cache: Present 00:08:52.431 Atomic Write Unit (Normal): 1 00:08:52.431 Atomic Write Unit (PFail): 1 00:08:52.431 Atomic Compare & Write Unit: 1 00:08:52.431 Fused Compare & Write: Not Supported 00:08:52.431 Scatter-Gather List 00:08:52.431 SGL Command Set: Supported 00:08:52.431 SGL Keyed: Not Supported 00:08:52.431 SGL Bit Bucket Descriptor: Not Supported 00:08:52.431 SGL Metadata Pointer: Not Supported 00:08:52.431 Oversized SGL: Not Supported 00:08:52.431 SGL Metadata Address: Not Supported 00:08:52.431 SGL Offset: Not Supported 00:08:52.431 Transport SGL Data Block: Not Supported 00:08:52.431 Replay Protected Memory Block: Not Supported 00:08:52.431 00:08:52.431 Firmware Slot Information 00:08:52.431 ========================= 00:08:52.431 Active slot: 1 00:08:52.431 Slot 1 Firmware Revision: 1.0 00:08:52.431 00:08:52.431 00:08:52.431 Commands Supported and Effects 00:08:52.431 ============================== 00:08:52.431 Admin Commands 00:08:52.431 -------------- 00:08:52.431 Delete I/O Submission Queue (00h): Supported 00:08:52.431 Create I/O Submission Queue (01h): Supported 00:08:52.431 
Get Log Page (02h): Supported 00:08:52.431 Delete I/O Completion Queue (04h): Supported 00:08:52.431 Create I/O Completion Queue (05h): Supported 00:08:52.431 Identify (06h): Supported 00:08:52.431 Abort (08h): Supported 00:08:52.431 Set Features (09h): Supported 00:08:52.431 Get Features (0Ah): Supported 00:08:52.431 Asynchronous Event Request (0Ch): Supported 00:08:52.431 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.431 Directive Send (19h): Supported 00:08:52.431 Directive Receive (1Ah): Supported 00:08:52.431 Virtualization Management (1Ch): Supported 00:08:52.431 Doorbell Buffer Config (7Ch): Supported 00:08:52.431 Format NVM (80h): Supported LBA-Change 00:08:52.431 I/O Commands 00:08:52.431 ------------ 00:08:52.431 Flush (00h): Supported LBA-Change 00:08:52.431 Write (01h): Supported LBA-Change 00:08:52.431 Read (02h): Supported 00:08:52.431 Compare (05h): Supported 00:08:52.431 Write Zeroes (08h): Supported LBA-Change 00:08:52.431 Dataset Management (09h): Supported LBA-Change 00:08:52.431 Unknown (0Ch): Supported 00:08:52.431 Unknown (12h): Supported 00:08:52.431 Copy (19h): Supported LBA-Change 00:08:52.431 Unknown (1Dh): Supported LBA-Change 00:08:52.431 00:08:52.431 Error Log 00:08:52.431 ========= 00:08:52.431 00:08:52.431 Arbitration 00:08:52.431 =========== 00:08:52.431 Arbitration Burst: no limit 00:08:52.431 00:08:52.431 Power Management 00:08:52.431 ================ 00:08:52.431 Number of Power States: 1 00:08:52.431 Current Power State: Power State #0 00:08:52.431 Power State #0: 00:08:52.431 Max Power: 25.00 W 00:08:52.431 Non-Operational State: Operational 00:08:52.431 Entry Latency: 16 microseconds 00:08:52.431 Exit Latency: 4 microseconds 00:08:52.431 Relative Read Throughput: 0 00:08:52.431 Relative Read Latency: 0 00:08:52.431 Relative Write Throughput: 0 00:08:52.431 Relative Write Latency: 0 00:08:52.431 Idle Power: Not Reported 00:08:52.431 Active Power: Not Reported 00:08:52.431 Non-Operational Permissive Mode: Not Supported 00:08:52.431 00:08:52.431 Health Information 00:08:52.431 ================== 00:08:52.431 Critical Warnings: 00:08:52.431 Available Spare Space: OK 00:08:52.431 Temperature: OK 00:08:52.431 Device Reliability: OK 00:08:52.431 Read Only: No 00:08:52.431 Volatile Memory Backup: OK 00:08:52.431 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.431 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.431 Available Spare: 0% 00:08:52.431 Available Spare Threshold: 0% 00:08:52.431 Life Percentage Used: 0% 00:08:52.431 Data Units Read: 767 00:08:52.431 Data Units Written: 695 00:08:52.431 Host Read Commands: 37869 00:08:52.431 Host Write Commands: 37655 00:08:52.431 Controller Busy Time: 0 minutes 00:08:52.431 Power Cycles: 0 00:08:52.431 Power On Hours: 0 hours 00:08:52.431 Unsafe Shutdowns: 0 00:08:52.431 Unrecoverable Media Errors: 0 00:08:52.431 Lifetime Error Log Entries: 0 00:08:52.431 Warning Temperature Time: 0 minutes 00:08:52.431 Critical Temperature Time: 0 minutes 00:08:52.431 00:08:52.431 Number of Queues 00:08:52.431 ================ 00:08:52.431 Number of I/O Submission Queues: 64 00:08:52.431 Number of I/O Completion Queues: 64 00:08:52.431 00:08:52.431 ZNS Specific Controller Data 00:08:52.431 ============================ 00:08:52.431 Zone Append Size Limit: 0 00:08:52.431 00:08:52.431 00:08:52.431 Active Namespaces 00:08:52.431 ================= 00:08:52.431 Namespace ID:1 00:08:52.431 Error Recovery Timeout: Unlimited 00:08:52.431 Command Set Identifier: NVM (00h) 00:08:52.431 Deallocate: Supported 
00:08:52.431 Deallocated/Unwritten Error: Supported 00:08:52.432 Deallocated Read Value: All 0x00 00:08:52.432 Deallocate in Write Zeroes: Not Supported 00:08:52.432 Deallocated Guard Field: 0xFFFF 00:08:52.432 Flush: Supported 00:08:52.432 Reservation: Not Supported 00:08:52.432 Metadata Transferred as: Separate Metadata Buffer 00:08:52.432 Namespace Sharing Capabilities: Private 00:08:52.432 Size (in LBAs): 1548666 (5GiB) 00:08:52.432 Capacity (in LBAs): 1548666 (5GiB) 00:08:52.432 Utilization (in LBAs): 1548666 (5GiB) 00:08:52.432 Thin Provisioning: Not Supported 00:08:52.432 Per-NS Atomic Units: No 00:08:52.432 Maximum Single Source Range Length: 128 00:08:52.432 Maximum Copy Length: 128 00:08:52.432 Maximum Source Range Count: 128 00:08:52.432 NGUID/EUI64 Never Reused: No 00:08:52.432 Namespace Write Protected: No 00:08:52.432 Number of LBA Formats: 8 00:08:52.432 Current LBA Format: LBA Format #07 00:08:52.432 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.432 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.432 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.432 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.432 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.432 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.432 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.432 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.432 00:08:52.432 NVM Specific Namespace Data 00:08:52.432 =========================== 00:08:52.432 Logical Block Storage Tag Mask: 0 00:08:52.432 Protection Information Capabilities: 00:08:52.432 16b Guard Protection Information Storage Tag Support: No 00:08:52.432 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.432 Storage Tag Check Read Support: No 00:08:52.432 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.432 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.432 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.432 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.432 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.432 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.432 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.432 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.432 10:45:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:52.432 10:45:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:52.692 ===================================================== 00:08:52.692 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:52.692 ===================================================== 00:08:52.692 Controller Capabilities/Features 00:08:52.692 ================================ 00:08:52.692 Vendor ID: 1b36 00:08:52.692 Subsystem Vendor ID: 1af4 00:08:52.692 Serial Number: 12341 00:08:52.692 Model Number: QEMU NVMe Ctrl 00:08:52.692 Firmware Version: 8.0.0 00:08:52.692 Recommended Arb Burst: 6 00:08:52.692 IEEE OUI Identifier: 00 54 52 00:08:52.692 Multi-path I/O 00:08:52.692 May have multiple subsystem ports: No 00:08:52.692 May have multiple 
controllers: No 00:08:52.692 Associated with SR-IOV VF: No 00:08:52.692 Max Data Transfer Size: 524288 00:08:52.692 Max Number of Namespaces: 256 00:08:52.692 Max Number of I/O Queues: 64 00:08:52.692 NVMe Specification Version (VS): 1.4 00:08:52.692 NVMe Specification Version (Identify): 1.4 00:08:52.692 Maximum Queue Entries: 2048 00:08:52.692 Contiguous Queues Required: Yes 00:08:52.692 Arbitration Mechanisms Supported 00:08:52.692 Weighted Round Robin: Not Supported 00:08:52.692 Vendor Specific: Not Supported 00:08:52.692 Reset Timeout: 7500 ms 00:08:52.692 Doorbell Stride: 4 bytes 00:08:52.692 NVM Subsystem Reset: Not Supported 00:08:52.692 Command Sets Supported 00:08:52.692 NVM Command Set: Supported 00:08:52.692 Boot Partition: Not Supported 00:08:52.692 Memory Page Size Minimum: 4096 bytes 00:08:52.692 Memory Page Size Maximum: 65536 bytes 00:08:52.692 Persistent Memory Region: Not Supported 00:08:52.692 Optional Asynchronous Events Supported 00:08:52.692 Namespace Attribute Notices: Supported 00:08:52.692 Firmware Activation Notices: Not Supported 00:08:52.692 ANA Change Notices: Not Supported 00:08:52.692 PLE Aggregate Log Change Notices: Not Supported 00:08:52.692 LBA Status Info Alert Notices: Not Supported 00:08:52.692 EGE Aggregate Log Change Notices: Not Supported 00:08:52.692 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.692 Zone Descriptor Change Notices: Not Supported 00:08:52.692 Discovery Log Change Notices: Not Supported 00:08:52.692 Controller Attributes 00:08:52.692 128-bit Host Identifier: Not Supported 00:08:52.692 Non-Operational Permissive Mode: Not Supported 00:08:52.692 NVM Sets: Not Supported 00:08:52.692 Read Recovery Levels: Not Supported 00:08:52.692 Endurance Groups: Not Supported 00:08:52.692 Predictable Latency Mode: Not Supported 00:08:52.692 Traffic Based Keep ALive: Not Supported 00:08:52.692 Namespace Granularity: Not Supported 00:08:52.692 SQ Associations: Not Supported 00:08:52.692 UUID List: Not Supported 00:08:52.692 Multi-Domain Subsystem: Not Supported 00:08:52.692 Fixed Capacity Management: Not Supported 00:08:52.692 Variable Capacity Management: Not Supported 00:08:52.692 Delete Endurance Group: Not Supported 00:08:52.692 Delete NVM Set: Not Supported 00:08:52.692 Extended LBA Formats Supported: Supported 00:08:52.692 Flexible Data Placement Supported: Not Supported 00:08:52.692 00:08:52.692 Controller Memory Buffer Support 00:08:52.692 ================================ 00:08:52.692 Supported: No 00:08:52.692 00:08:52.692 Persistent Memory Region Support 00:08:52.692 ================================ 00:08:52.692 Supported: No 00:08:52.692 00:08:52.692 Admin Command Set Attributes 00:08:52.692 ============================ 00:08:52.692 Security Send/Receive: Not Supported 00:08:52.692 Format NVM: Supported 00:08:52.692 Firmware Activate/Download: Not Supported 00:08:52.692 Namespace Management: Supported 00:08:52.692 Device Self-Test: Not Supported 00:08:52.692 Directives: Supported 00:08:52.692 NVMe-MI: Not Supported 00:08:52.692 Virtualization Management: Not Supported 00:08:52.692 Doorbell Buffer Config: Supported 00:08:52.692 Get LBA Status Capability: Not Supported 00:08:52.692 Command & Feature Lockdown Capability: Not Supported 00:08:52.692 Abort Command Limit: 4 00:08:52.692 Async Event Request Limit: 4 00:08:52.692 Number of Firmware Slots: N/A 00:08:52.692 Firmware Slot 1 Read-Only: N/A 00:08:52.692 Firmware Activation Without Reset: N/A 00:08:52.692 Multiple Update Detection Support: N/A 00:08:52.692 Firmware Update 
Granularity: No Information Provided 00:08:52.692 Per-Namespace SMART Log: Yes 00:08:52.692 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.692 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:52.692 Command Effects Log Page: Supported 00:08:52.692 Get Log Page Extended Data: Supported 00:08:52.692 Telemetry Log Pages: Not Supported 00:08:52.692 Persistent Event Log Pages: Not Supported 00:08:52.692 Supported Log Pages Log Page: May Support 00:08:52.692 Commands Supported & Effects Log Page: Not Supported 00:08:52.692 Feature Identifiers & Effects Log Page:May Support 00:08:52.692 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.692 Data Area 4 for Telemetry Log: Not Supported 00:08:52.692 Error Log Page Entries Supported: 1 00:08:52.692 Keep Alive: Not Supported 00:08:52.692 00:08:52.692 NVM Command Set Attributes 00:08:52.692 ========================== 00:08:52.692 Submission Queue Entry Size 00:08:52.692 Max: 64 00:08:52.692 Min: 64 00:08:52.692 Completion Queue Entry Size 00:08:52.692 Max: 16 00:08:52.692 Min: 16 00:08:52.692 Number of Namespaces: 256 00:08:52.692 Compare Command: Supported 00:08:52.692 Write Uncorrectable Command: Not Supported 00:08:52.692 Dataset Management Command: Supported 00:08:52.692 Write Zeroes Command: Supported 00:08:52.692 Set Features Save Field: Supported 00:08:52.692 Reservations: Not Supported 00:08:52.692 Timestamp: Supported 00:08:52.692 Copy: Supported 00:08:52.692 Volatile Write Cache: Present 00:08:52.692 Atomic Write Unit (Normal): 1 00:08:52.692 Atomic Write Unit (PFail): 1 00:08:52.692 Atomic Compare & Write Unit: 1 00:08:52.692 Fused Compare & Write: Not Supported 00:08:52.692 Scatter-Gather List 00:08:52.692 SGL Command Set: Supported 00:08:52.692 SGL Keyed: Not Supported 00:08:52.692 SGL Bit Bucket Descriptor: Not Supported 00:08:52.692 SGL Metadata Pointer: Not Supported 00:08:52.692 Oversized SGL: Not Supported 00:08:52.692 SGL Metadata Address: Not Supported 00:08:52.692 SGL Offset: Not Supported 00:08:52.692 Transport SGL Data Block: Not Supported 00:08:52.692 Replay Protected Memory Block: Not Supported 00:08:52.692 00:08:52.692 Firmware Slot Information 00:08:52.692 ========================= 00:08:52.692 Active slot: 1 00:08:52.692 Slot 1 Firmware Revision: 1.0 00:08:52.692 00:08:52.692 00:08:52.692 Commands Supported and Effects 00:08:52.692 ============================== 00:08:52.692 Admin Commands 00:08:52.692 -------------- 00:08:52.692 Delete I/O Submission Queue (00h): Supported 00:08:52.692 Create I/O Submission Queue (01h): Supported 00:08:52.692 Get Log Page (02h): Supported 00:08:52.692 Delete I/O Completion Queue (04h): Supported 00:08:52.693 Create I/O Completion Queue (05h): Supported 00:08:52.693 Identify (06h): Supported 00:08:52.693 Abort (08h): Supported 00:08:52.693 Set Features (09h): Supported 00:08:52.693 Get Features (0Ah): Supported 00:08:52.693 Asynchronous Event Request (0Ch): Supported 00:08:52.693 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.693 Directive Send (19h): Supported 00:08:52.693 Directive Receive (1Ah): Supported 00:08:52.693 Virtualization Management (1Ch): Supported 00:08:52.693 Doorbell Buffer Config (7Ch): Supported 00:08:52.693 Format NVM (80h): Supported LBA-Change 00:08:52.693 I/O Commands 00:08:52.693 ------------ 00:08:52.693 Flush (00h): Supported LBA-Change 00:08:52.693 Write (01h): Supported LBA-Change 00:08:52.693 Read (02h): Supported 00:08:52.693 Compare (05h): Supported 00:08:52.693 Write Zeroes (08h): Supported LBA-Change 00:08:52.693 
Dataset Management (09h): Supported LBA-Change 00:08:52.693 Unknown (0Ch): Supported 00:08:52.693 Unknown (12h): Supported 00:08:52.693 Copy (19h): Supported LBA-Change 00:08:52.693 Unknown (1Dh): Supported LBA-Change 00:08:52.693 00:08:52.693 Error Log 00:08:52.693 ========= 00:08:52.693 00:08:52.693 Arbitration 00:08:52.693 =========== 00:08:52.693 Arbitration Burst: no limit 00:08:52.693 00:08:52.693 Power Management 00:08:52.693 ================ 00:08:52.693 Number of Power States: 1 00:08:52.693 Current Power State: Power State #0 00:08:52.693 Power State #0: 00:08:52.693 Max Power: 25.00 W 00:08:52.693 Non-Operational State: Operational 00:08:52.693 Entry Latency: 16 microseconds 00:08:52.693 Exit Latency: 4 microseconds 00:08:52.693 Relative Read Throughput: 0 00:08:52.693 Relative Read Latency: 0 00:08:52.693 Relative Write Throughput: 0 00:08:52.693 Relative Write Latency: 0 00:08:52.693 Idle Power: Not Reported 00:08:52.693 Active Power: Not Reported 00:08:52.693 Non-Operational Permissive Mode: Not Supported 00:08:52.693 00:08:52.693 Health Information 00:08:52.693 ================== 00:08:52.693 Critical Warnings: 00:08:52.693 Available Spare Space: OK 00:08:52.693 Temperature: OK 00:08:52.693 Device Reliability: OK 00:08:52.693 Read Only: No 00:08:52.693 Volatile Memory Backup: OK 00:08:52.693 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.693 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.693 Available Spare: 0% 00:08:52.693 Available Spare Threshold: 0% 00:08:52.693 Life Percentage Used: 0% 00:08:52.693 Data Units Read: 1171 00:08:52.693 Data Units Written: 1032 00:08:52.693 Host Read Commands: 55649 00:08:52.693 Host Write Commands: 54335 00:08:52.693 Controller Busy Time: 0 minutes 00:08:52.693 Power Cycles: 0 00:08:52.693 Power On Hours: 0 hours 00:08:52.693 Unsafe Shutdowns: 0 00:08:52.693 Unrecoverable Media Errors: 0 00:08:52.693 Lifetime Error Log Entries: 0 00:08:52.693 Warning Temperature Time: 0 minutes 00:08:52.693 Critical Temperature Time: 0 minutes 00:08:52.693 00:08:52.693 Number of Queues 00:08:52.693 ================ 00:08:52.693 Number of I/O Submission Queues: 64 00:08:52.693 Number of I/O Completion Queues: 64 00:08:52.693 00:08:52.693 ZNS Specific Controller Data 00:08:52.693 ============================ 00:08:52.693 Zone Append Size Limit: 0 00:08:52.693 00:08:52.693 00:08:52.693 Active Namespaces 00:08:52.693 ================= 00:08:52.693 Namespace ID:1 00:08:52.693 Error Recovery Timeout: Unlimited 00:08:52.693 Command Set Identifier: NVM (00h) 00:08:52.693 Deallocate: Supported 00:08:52.693 Deallocated/Unwritten Error: Supported 00:08:52.693 Deallocated Read Value: All 0x00 00:08:52.693 Deallocate in Write Zeroes: Not Supported 00:08:52.693 Deallocated Guard Field: 0xFFFF 00:08:52.693 Flush: Supported 00:08:52.693 Reservation: Not Supported 00:08:52.693 Namespace Sharing Capabilities: Private 00:08:52.693 Size (in LBAs): 1310720 (5GiB) 00:08:52.693 Capacity (in LBAs): 1310720 (5GiB) 00:08:52.693 Utilization (in LBAs): 1310720 (5GiB) 00:08:52.693 Thin Provisioning: Not Supported 00:08:52.693 Per-NS Atomic Units: No 00:08:52.693 Maximum Single Source Range Length: 128 00:08:52.693 Maximum Copy Length: 128 00:08:52.693 Maximum Source Range Count: 128 00:08:52.693 NGUID/EUI64 Never Reused: No 00:08:52.693 Namespace Write Protected: No 00:08:52.693 Number of LBA Formats: 8 00:08:52.693 Current LBA Format: LBA Format #04 00:08:52.693 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.693 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:08:52.693 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.693 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.693 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.693 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.693 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.693 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.693 00:08:52.693 NVM Specific Namespace Data 00:08:52.693 =========================== 00:08:52.693 Logical Block Storage Tag Mask: 0 00:08:52.693 Protection Information Capabilities: 00:08:52.693 16b Guard Protection Information Storage Tag Support: No 00:08:52.693 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.693 Storage Tag Check Read Support: No 00:08:52.693 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.693 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.693 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.693 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.693 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.693 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.693 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.693 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.693 10:45:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:52.693 10:45:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:52.953 ===================================================== 00:08:52.953 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:52.953 ===================================================== 00:08:52.953 Controller Capabilities/Features 00:08:52.953 ================================ 00:08:52.953 Vendor ID: 1b36 00:08:52.953 Subsystem Vendor ID: 1af4 00:08:52.953 Serial Number: 12342 00:08:52.953 Model Number: QEMU NVMe Ctrl 00:08:52.953 Firmware Version: 8.0.0 00:08:52.953 Recommended Arb Burst: 6 00:08:52.953 IEEE OUI Identifier: 00 54 52 00:08:52.953 Multi-path I/O 00:08:52.953 May have multiple subsystem ports: No 00:08:52.953 May have multiple controllers: No 00:08:52.953 Associated with SR-IOV VF: No 00:08:52.953 Max Data Transfer Size: 524288 00:08:52.953 Max Number of Namespaces: 256 00:08:52.953 Max Number of I/O Queues: 64 00:08:52.953 NVMe Specification Version (VS): 1.4 00:08:52.953 NVMe Specification Version (Identify): 1.4 00:08:52.953 Maximum Queue Entries: 2048 00:08:52.953 Contiguous Queues Required: Yes 00:08:52.953 Arbitration Mechanisms Supported 00:08:52.953 Weighted Round Robin: Not Supported 00:08:52.953 Vendor Specific: Not Supported 00:08:52.953 Reset Timeout: 7500 ms 00:08:52.953 Doorbell Stride: 4 bytes 00:08:52.953 NVM Subsystem Reset: Not Supported 00:08:52.953 Command Sets Supported 00:08:52.953 NVM Command Set: Supported 00:08:52.953 Boot Partition: Not Supported 00:08:52.953 Memory Page Size Minimum: 4096 bytes 00:08:52.953 Memory Page Size Maximum: 65536 bytes 00:08:52.953 Persistent Memory Region: Not Supported 00:08:52.953 Optional Asynchronous Events Supported 00:08:52.953 Namespace Attribute Notices: Supported 00:08:52.953 
Firmware Activation Notices: Not Supported 00:08:52.953 ANA Change Notices: Not Supported 00:08:52.953 PLE Aggregate Log Change Notices: Not Supported 00:08:52.953 LBA Status Info Alert Notices: Not Supported 00:08:52.954 EGE Aggregate Log Change Notices: Not Supported 00:08:52.954 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.954 Zone Descriptor Change Notices: Not Supported 00:08:52.954 Discovery Log Change Notices: Not Supported 00:08:52.954 Controller Attributes 00:08:52.954 128-bit Host Identifier: Not Supported 00:08:52.954 Non-Operational Permissive Mode: Not Supported 00:08:52.954 NVM Sets: Not Supported 00:08:52.954 Read Recovery Levels: Not Supported 00:08:52.954 Endurance Groups: Not Supported 00:08:52.954 Predictable Latency Mode: Not Supported 00:08:52.954 Traffic Based Keep ALive: Not Supported 00:08:52.954 Namespace Granularity: Not Supported 00:08:52.954 SQ Associations: Not Supported 00:08:52.954 UUID List: Not Supported 00:08:52.954 Multi-Domain Subsystem: Not Supported 00:08:52.954 Fixed Capacity Management: Not Supported 00:08:52.954 Variable Capacity Management: Not Supported 00:08:52.954 Delete Endurance Group: Not Supported 00:08:52.954 Delete NVM Set: Not Supported 00:08:52.954 Extended LBA Formats Supported: Supported 00:08:52.954 Flexible Data Placement Supported: Not Supported 00:08:52.954 00:08:52.954 Controller Memory Buffer Support 00:08:52.954 ================================ 00:08:52.954 Supported: No 00:08:52.954 00:08:52.954 Persistent Memory Region Support 00:08:52.954 ================================ 00:08:52.954 Supported: No 00:08:52.954 00:08:52.954 Admin Command Set Attributes 00:08:52.954 ============================ 00:08:52.954 Security Send/Receive: Not Supported 00:08:52.954 Format NVM: Supported 00:08:52.954 Firmware Activate/Download: Not Supported 00:08:52.954 Namespace Management: Supported 00:08:52.954 Device Self-Test: Not Supported 00:08:52.954 Directives: Supported 00:08:52.954 NVMe-MI: Not Supported 00:08:52.954 Virtualization Management: Not Supported 00:08:52.954 Doorbell Buffer Config: Supported 00:08:52.954 Get LBA Status Capability: Not Supported 00:08:52.954 Command & Feature Lockdown Capability: Not Supported 00:08:52.954 Abort Command Limit: 4 00:08:52.954 Async Event Request Limit: 4 00:08:52.954 Number of Firmware Slots: N/A 00:08:52.954 Firmware Slot 1 Read-Only: N/A 00:08:52.954 Firmware Activation Without Reset: N/A 00:08:52.954 Multiple Update Detection Support: N/A 00:08:52.954 Firmware Update Granularity: No Information Provided 00:08:52.954 Per-Namespace SMART Log: Yes 00:08:52.954 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.954 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:52.954 Command Effects Log Page: Supported 00:08:52.954 Get Log Page Extended Data: Supported 00:08:52.954 Telemetry Log Pages: Not Supported 00:08:52.954 Persistent Event Log Pages: Not Supported 00:08:52.954 Supported Log Pages Log Page: May Support 00:08:52.954 Commands Supported & Effects Log Page: Not Supported 00:08:52.954 Feature Identifiers & Effects Log Page:May Support 00:08:52.954 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.954 Data Area 4 for Telemetry Log: Not Supported 00:08:52.954 Error Log Page Entries Supported: 1 00:08:52.954 Keep Alive: Not Supported 00:08:52.954 00:08:52.954 NVM Command Set Attributes 00:08:52.954 ========================== 00:08:52.954 Submission Queue Entry Size 00:08:52.954 Max: 64 00:08:52.954 Min: 64 00:08:52.954 Completion Queue Entry Size 00:08:52.954 Max: 16 
00:08:52.954 Min: 16 00:08:52.954 Number of Namespaces: 256 00:08:52.954 Compare Command: Supported 00:08:52.954 Write Uncorrectable Command: Not Supported 00:08:52.954 Dataset Management Command: Supported 00:08:52.954 Write Zeroes Command: Supported 00:08:52.954 Set Features Save Field: Supported 00:08:52.954 Reservations: Not Supported 00:08:52.954 Timestamp: Supported 00:08:52.954 Copy: Supported 00:08:52.954 Volatile Write Cache: Present 00:08:52.954 Atomic Write Unit (Normal): 1 00:08:52.954 Atomic Write Unit (PFail): 1 00:08:52.954 Atomic Compare & Write Unit: 1 00:08:52.954 Fused Compare & Write: Not Supported 00:08:52.954 Scatter-Gather List 00:08:52.954 SGL Command Set: Supported 00:08:52.954 SGL Keyed: Not Supported 00:08:52.954 SGL Bit Bucket Descriptor: Not Supported 00:08:52.954 SGL Metadata Pointer: Not Supported 00:08:52.954 Oversized SGL: Not Supported 00:08:52.954 SGL Metadata Address: Not Supported 00:08:52.954 SGL Offset: Not Supported 00:08:52.954 Transport SGL Data Block: Not Supported 00:08:52.954 Replay Protected Memory Block: Not Supported 00:08:52.954 00:08:52.954 Firmware Slot Information 00:08:52.954 ========================= 00:08:52.954 Active slot: 1 00:08:52.954 Slot 1 Firmware Revision: 1.0 00:08:52.954 00:08:52.954 00:08:52.954 Commands Supported and Effects 00:08:52.954 ============================== 00:08:52.954 Admin Commands 00:08:52.954 -------------- 00:08:52.954 Delete I/O Submission Queue (00h): Supported 00:08:52.954 Create I/O Submission Queue (01h): Supported 00:08:52.954 Get Log Page (02h): Supported 00:08:52.954 Delete I/O Completion Queue (04h): Supported 00:08:52.954 Create I/O Completion Queue (05h): Supported 00:08:52.954 Identify (06h): Supported 00:08:52.954 Abort (08h): Supported 00:08:52.954 Set Features (09h): Supported 00:08:52.954 Get Features (0Ah): Supported 00:08:52.954 Asynchronous Event Request (0Ch): Supported 00:08:52.954 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.954 Directive Send (19h): Supported 00:08:52.954 Directive Receive (1Ah): Supported 00:08:52.954 Virtualization Management (1Ch): Supported 00:08:52.954 Doorbell Buffer Config (7Ch): Supported 00:08:52.954 Format NVM (80h): Supported LBA-Change 00:08:52.954 I/O Commands 00:08:52.954 ------------ 00:08:52.954 Flush (00h): Supported LBA-Change 00:08:52.954 Write (01h): Supported LBA-Change 00:08:52.954 Read (02h): Supported 00:08:52.954 Compare (05h): Supported 00:08:52.954 Write Zeroes (08h): Supported LBA-Change 00:08:52.954 Dataset Management (09h): Supported LBA-Change 00:08:52.954 Unknown (0Ch): Supported 00:08:52.954 Unknown (12h): Supported 00:08:52.954 Copy (19h): Supported LBA-Change 00:08:52.954 Unknown (1Dh): Supported LBA-Change 00:08:52.954 00:08:52.954 Error Log 00:08:52.954 ========= 00:08:52.954 00:08:52.954 Arbitration 00:08:52.954 =========== 00:08:52.954 Arbitration Burst: no limit 00:08:52.954 00:08:52.954 Power Management 00:08:52.954 ================ 00:08:52.954 Number of Power States: 1 00:08:52.954 Current Power State: Power State #0 00:08:52.954 Power State #0: 00:08:52.954 Max Power: 25.00 W 00:08:52.954 Non-Operational State: Operational 00:08:52.954 Entry Latency: 16 microseconds 00:08:52.954 Exit Latency: 4 microseconds 00:08:52.954 Relative Read Throughput: 0 00:08:52.954 Relative Read Latency: 0 00:08:52.954 Relative Write Throughput: 0 00:08:52.954 Relative Write Latency: 0 00:08:52.954 Idle Power: Not Reported 00:08:52.954 Active Power: Not Reported 00:08:52.954 Non-Operational Permissive Mode: Not Supported 
00:08:52.954 00:08:52.954 Health Information 00:08:52.954 ================== 00:08:52.954 Critical Warnings: 00:08:52.954 Available Spare Space: OK 00:08:52.954 Temperature: OK 00:08:52.954 Device Reliability: OK 00:08:52.954 Read Only: No 00:08:52.954 Volatile Memory Backup: OK 00:08:52.954 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.954 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.954 Available Spare: 0% 00:08:52.954 Available Spare Threshold: 0% 00:08:52.954 Life Percentage Used: 0% 00:08:52.954 Data Units Read: 2422 00:08:52.954 Data Units Written: 2209 00:08:52.954 Host Read Commands: 115461 00:08:52.954 Host Write Commands: 113730 00:08:52.954 Controller Busy Time: 0 minutes 00:08:52.954 Power Cycles: 0 00:08:52.954 Power On Hours: 0 hours 00:08:52.954 Unsafe Shutdowns: 0 00:08:52.954 Unrecoverable Media Errors: 0 00:08:52.954 Lifetime Error Log Entries: 0 00:08:52.954 Warning Temperature Time: 0 minutes 00:08:52.954 Critical Temperature Time: 0 minutes 00:08:52.954 00:08:52.954 Number of Queues 00:08:52.954 ================ 00:08:52.954 Number of I/O Submission Queues: 64 00:08:52.954 Number of I/O Completion Queues: 64 00:08:52.954 00:08:52.954 ZNS Specific Controller Data 00:08:52.954 ============================ 00:08:52.954 Zone Append Size Limit: 0 00:08:52.954 00:08:52.954 00:08:52.954 Active Namespaces 00:08:52.954 ================= 00:08:52.954 Namespace ID:1 00:08:52.954 Error Recovery Timeout: Unlimited 00:08:52.954 Command Set Identifier: NVM (00h) 00:08:52.954 Deallocate: Supported 00:08:52.954 Deallocated/Unwritten Error: Supported 00:08:52.954 Deallocated Read Value: All 0x00 00:08:52.955 Deallocate in Write Zeroes: Not Supported 00:08:52.955 Deallocated Guard Field: 0xFFFF 00:08:52.955 Flush: Supported 00:08:52.955 Reservation: Not Supported 00:08:52.955 Namespace Sharing Capabilities: Private 00:08:52.955 Size (in LBAs): 1048576 (4GiB) 00:08:52.955 Capacity (in LBAs): 1048576 (4GiB) 00:08:52.955 Utilization (in LBAs): 1048576 (4GiB) 00:08:52.955 Thin Provisioning: Not Supported 00:08:52.955 Per-NS Atomic Units: No 00:08:52.955 Maximum Single Source Range Length: 128 00:08:52.955 Maximum Copy Length: 128 00:08:52.955 Maximum Source Range Count: 128 00:08:52.955 NGUID/EUI64 Never Reused: No 00:08:52.955 Namespace Write Protected: No 00:08:52.955 Number of LBA Formats: 8 00:08:52.955 Current LBA Format: LBA Format #04 00:08:52.955 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.955 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.955 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.955 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.955 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.955 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.955 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.955 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.955 00:08:52.955 NVM Specific Namespace Data 00:08:52.955 =========================== 00:08:52.955 Logical Block Storage Tag Mask: 0 00:08:52.955 Protection Information Capabilities: 00:08:52.955 16b Guard Protection Information Storage Tag Support: No 00:08:52.955 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.955 Storage Tag Check Read Support: No 00:08:52.955 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Namespace ID:2 00:08:52.955 Error Recovery Timeout: Unlimited 00:08:52.955 Command Set Identifier: NVM (00h) 00:08:52.955 Deallocate: Supported 00:08:52.955 Deallocated/Unwritten Error: Supported 00:08:52.955 Deallocated Read Value: All 0x00 00:08:52.955 Deallocate in Write Zeroes: Not Supported 00:08:52.955 Deallocated Guard Field: 0xFFFF 00:08:52.955 Flush: Supported 00:08:52.955 Reservation: Not Supported 00:08:52.955 Namespace Sharing Capabilities: Private 00:08:52.955 Size (in LBAs): 1048576 (4GiB) 00:08:52.955 Capacity (in LBAs): 1048576 (4GiB) 00:08:52.955 Utilization (in LBAs): 1048576 (4GiB) 00:08:52.955 Thin Provisioning: Not Supported 00:08:52.955 Per-NS Atomic Units: No 00:08:52.955 Maximum Single Source Range Length: 128 00:08:52.955 Maximum Copy Length: 128 00:08:52.955 Maximum Source Range Count: 128 00:08:52.955 NGUID/EUI64 Never Reused: No 00:08:52.955 Namespace Write Protected: No 00:08:52.955 Number of LBA Formats: 8 00:08:52.955 Current LBA Format: LBA Format #04 00:08:52.955 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.955 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.955 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.955 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.955 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.955 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.955 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.955 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.955 00:08:52.955 NVM Specific Namespace Data 00:08:52.955 =========================== 00:08:52.955 Logical Block Storage Tag Mask: 0 00:08:52.955 Protection Information Capabilities: 00:08:52.955 16b Guard Protection Information Storage Tag Support: No 00:08:52.955 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.955 Storage Tag Check Read Support: No 00:08:52.955 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Namespace ID:3 00:08:52.955 Error Recovery Timeout: Unlimited 00:08:52.955 Command Set Identifier: NVM (00h) 00:08:52.955 Deallocate: Supported 00:08:52.955 Deallocated/Unwritten Error: Supported 00:08:52.955 Deallocated Read 
Value: All 0x00 00:08:52.955 Deallocate in Write Zeroes: Not Supported 00:08:52.955 Deallocated Guard Field: 0xFFFF 00:08:52.955 Flush: Supported 00:08:52.955 Reservation: Not Supported 00:08:52.955 Namespace Sharing Capabilities: Private 00:08:52.955 Size (in LBAs): 1048576 (4GiB) 00:08:52.955 Capacity (in LBAs): 1048576 (4GiB) 00:08:52.955 Utilization (in LBAs): 1048576 (4GiB) 00:08:52.955 Thin Provisioning: Not Supported 00:08:52.955 Per-NS Atomic Units: No 00:08:52.955 Maximum Single Source Range Length: 128 00:08:52.955 Maximum Copy Length: 128 00:08:52.955 Maximum Source Range Count: 128 00:08:52.955 NGUID/EUI64 Never Reused: No 00:08:52.955 Namespace Write Protected: No 00:08:52.955 Number of LBA Formats: 8 00:08:52.955 Current LBA Format: LBA Format #04 00:08:52.955 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.955 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.955 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.955 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.955 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.955 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.955 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.955 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.955 00:08:52.955 NVM Specific Namespace Data 00:08:52.955 =========================== 00:08:52.955 Logical Block Storage Tag Mask: 0 00:08:52.955 Protection Information Capabilities: 00:08:52.955 16b Guard Protection Information Storage Tag Support: No 00:08:52.955 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.955 Storage Tag Check Read Support: No 00:08:52.955 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.955 10:45:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:52.955 10:45:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:53.215 ===================================================== 00:08:53.215 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:53.215 ===================================================== 00:08:53.215 Controller Capabilities/Features 00:08:53.215 ================================ 00:08:53.215 Vendor ID: 1b36 00:08:53.215 Subsystem Vendor ID: 1af4 00:08:53.215 Serial Number: 12343 00:08:53.215 Model Number: QEMU NVMe Ctrl 00:08:53.215 Firmware Version: 8.0.0 00:08:53.215 Recommended Arb Burst: 6 00:08:53.215 IEEE OUI Identifier: 00 54 52 00:08:53.215 Multi-path I/O 00:08:53.215 May have multiple subsystem ports: No 00:08:53.215 May have multiple controllers: Yes 00:08:53.215 Associated with SR-IOV VF: No 00:08:53.215 Max Data Transfer Size: 524288 00:08:53.215 Max Number of Namespaces: 
256 00:08:53.215 Max Number of I/O Queues: 64 00:08:53.215 NVMe Specification Version (VS): 1.4 00:08:53.215 NVMe Specification Version (Identify): 1.4 00:08:53.215 Maximum Queue Entries: 2048 00:08:53.215 Contiguous Queues Required: Yes 00:08:53.215 Arbitration Mechanisms Supported 00:08:53.215 Weighted Round Robin: Not Supported 00:08:53.215 Vendor Specific: Not Supported 00:08:53.215 Reset Timeout: 7500 ms 00:08:53.215 Doorbell Stride: 4 bytes 00:08:53.216 NVM Subsystem Reset: Not Supported 00:08:53.216 Command Sets Supported 00:08:53.216 NVM Command Set: Supported 00:08:53.216 Boot Partition: Not Supported 00:08:53.216 Memory Page Size Minimum: 4096 bytes 00:08:53.216 Memory Page Size Maximum: 65536 bytes 00:08:53.216 Persistent Memory Region: Not Supported 00:08:53.216 Optional Asynchronous Events Supported 00:08:53.216 Namespace Attribute Notices: Supported 00:08:53.216 Firmware Activation Notices: Not Supported 00:08:53.216 ANA Change Notices: Not Supported 00:08:53.216 PLE Aggregate Log Change Notices: Not Supported 00:08:53.216 LBA Status Info Alert Notices: Not Supported 00:08:53.216 EGE Aggregate Log Change Notices: Not Supported 00:08:53.216 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.216 Zone Descriptor Change Notices: Not Supported 00:08:53.216 Discovery Log Change Notices: Not Supported 00:08:53.216 Controller Attributes 00:08:53.216 128-bit Host Identifier: Not Supported 00:08:53.216 Non-Operational Permissive Mode: Not Supported 00:08:53.216 NVM Sets: Not Supported 00:08:53.216 Read Recovery Levels: Not Supported 00:08:53.216 Endurance Groups: Supported 00:08:53.216 Predictable Latency Mode: Not Supported 00:08:53.216 Traffic Based Keep Alive: Not Supported 00:08:53.216 Namespace Granularity: Not Supported 00:08:53.216 SQ Associations: Not Supported 00:08:53.216 UUID List: Not Supported 00:08:53.216 Multi-Domain Subsystem: Not Supported 00:08:53.216 Fixed Capacity Management: Not Supported 00:08:53.216 Variable Capacity Management: Not Supported 00:08:53.216 Delete Endurance Group: Not Supported 00:08:53.216 Delete NVM Set: Not Supported 00:08:53.216 Extended LBA Formats Supported: Supported 00:08:53.216 Flexible Data Placement Supported: Supported 00:08:53.216 00:08:53.216 Controller Memory Buffer Support 00:08:53.216 ================================ 00:08:53.216 Supported: No 00:08:53.216 00:08:53.216 Persistent Memory Region Support 00:08:53.216 ================================ 00:08:53.216 Supported: No 00:08:53.216 00:08:53.216 Admin Command Set Attributes 00:08:53.216 ============================ 00:08:53.216 Security Send/Receive: Not Supported 00:08:53.216 Format NVM: Supported 00:08:53.216 Firmware Activate/Download: Not Supported 00:08:53.216 Namespace Management: Supported 00:08:53.216 Device Self-Test: Not Supported 00:08:53.216 Directives: Supported 00:08:53.216 NVMe-MI: Not Supported 00:08:53.216 Virtualization Management: Not Supported 00:08:53.216 Doorbell Buffer Config: Supported 00:08:53.216 Get LBA Status Capability: Not Supported 00:08:53.216 Command & Feature Lockdown Capability: Not Supported 00:08:53.216 Abort Command Limit: 4 00:08:53.216 Async Event Request Limit: 4 00:08:53.216 Number of Firmware Slots: N/A 00:08:53.216 Firmware Slot 1 Read-Only: N/A 00:08:53.216 Firmware Activation Without Reset: N/A 00:08:53.216 Multiple Update Detection Support: N/A 00:08:53.216 Firmware Update Granularity: No Information Provided 00:08:53.216 Per-Namespace SMART Log: Yes 00:08:53.216 Asymmetric Namespace Access Log Page: Not Supported
00:08:53.216 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:53.216 Command Effects Log Page: Supported 00:08:53.216 Get Log Page Extended Data: Supported 00:08:53.216 Telemetry Log Pages: Not Supported 00:08:53.216 Persistent Event Log Pages: Not Supported 00:08:53.216 Supported Log Pages Log Page: May Support 00:08:53.216 Commands Supported & Effects Log Page: Not Supported 00:08:53.216 Feature Identifiers & Effects Log Page: May Support 00:08:53.216 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.216 Data Area 4 for Telemetry Log: Not Supported 00:08:53.216 Error Log Page Entries Supported: 1 00:08:53.216 Keep Alive: Not Supported 00:08:53.216 00:08:53.216 NVM Command Set Attributes 00:08:53.216 ========================== 00:08:53.216 Submission Queue Entry Size 00:08:53.216 Max: 64 00:08:53.216 Min: 64 00:08:53.216 Completion Queue Entry Size 00:08:53.216 Max: 16 00:08:53.216 Min: 16 00:08:53.216 Number of Namespaces: 256 00:08:53.216 Compare Command: Supported 00:08:53.216 Write Uncorrectable Command: Not Supported 00:08:53.216 Dataset Management Command: Supported 00:08:53.216 Write Zeroes Command: Supported 00:08:53.216 Set Features Save Field: Supported 00:08:53.216 Reservations: Not Supported 00:08:53.216 Timestamp: Supported 00:08:53.216 Copy: Supported 00:08:53.216 Volatile Write Cache: Present 00:08:53.216 Atomic Write Unit (Normal): 1 00:08:53.216 Atomic Write Unit (PFail): 1 00:08:53.216 Atomic Compare & Write Unit: 1 00:08:53.216 Fused Compare & Write: Not Supported 00:08:53.216 Scatter-Gather List 00:08:53.216 SGL Command Set: Supported 00:08:53.216 SGL Keyed: Not Supported 00:08:53.216 SGL Bit Bucket Descriptor: Not Supported 00:08:53.216 SGL Metadata Pointer: Not Supported 00:08:53.216 Oversized SGL: Not Supported 00:08:53.216 SGL Metadata Address: Not Supported 00:08:53.216 SGL Offset: Not Supported 00:08:53.216 Transport SGL Data Block: Not Supported 00:08:53.216 Replay Protected Memory Block: Not Supported 00:08:53.216 00:08:53.216 Firmware Slot Information 00:08:53.216 ========================= 00:08:53.216 Active slot: 1 00:08:53.216 Slot 1 Firmware Revision: 1.0 00:08:53.216 00:08:53.216 00:08:53.216 Commands Supported and Effects 00:08:53.216 ============================== 00:08:53.216 Admin Commands 00:08:53.216 -------------- 00:08:53.216 Delete I/O Submission Queue (00h): Supported 00:08:53.216 Create I/O Submission Queue (01h): Supported 00:08:53.216 Get Log Page (02h): Supported 00:08:53.216 Delete I/O Completion Queue (04h): Supported 00:08:53.216 Create I/O Completion Queue (05h): Supported 00:08:53.216 Identify (06h): Supported 00:08:53.216 Abort (08h): Supported 00:08:53.216 Set Features (09h): Supported 00:08:53.216 Get Features (0Ah): Supported 00:08:53.216 Asynchronous Event Request (0Ch): Supported 00:08:53.216 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.216 Directive Send (19h): Supported 00:08:53.216 Directive Receive (1Ah): Supported 00:08:53.216 Virtualization Management (1Ch): Supported 00:08:53.216 Doorbell Buffer Config (7Ch): Supported 00:08:53.216 Format NVM (80h): Supported LBA-Change 00:08:53.216 I/O Commands 00:08:53.216 ------------ 00:08:53.216 Flush (00h): Supported LBA-Change 00:08:53.216 Write (01h): Supported LBA-Change 00:08:53.216 Read (02h): Supported 00:08:53.216 Compare (05h): Supported 00:08:53.216 Write Zeroes (08h): Supported LBA-Change 00:08:53.216 Dataset Management (09h): Supported LBA-Change 00:08:53.216 Unknown (0Ch): Supported 00:08:53.216 Unknown (12h): Supported 00:08:53.216 Copy
(19h): Supported LBA-Change 00:08:53.216 Unknown (1Dh): Supported LBA-Change 00:08:53.216 00:08:53.216 Error Log 00:08:53.216 ========= 00:08:53.216 00:08:53.216 Arbitration 00:08:53.216 =========== 00:08:53.216 Arbitration Burst: no limit 00:08:53.216 00:08:53.216 Power Management 00:08:53.216 ================ 00:08:53.216 Number of Power States: 1 00:08:53.216 Current Power State: Power State #0 00:08:53.216 Power State #0: 00:08:53.216 Max Power: 25.00 W 00:08:53.216 Non-Operational State: Operational 00:08:53.216 Entry Latency: 16 microseconds 00:08:53.216 Exit Latency: 4 microseconds 00:08:53.216 Relative Read Throughput: 0 00:08:53.216 Relative Read Latency: 0 00:08:53.216 Relative Write Throughput: 0 00:08:53.216 Relative Write Latency: 0 00:08:53.216 Idle Power: Not Reported 00:08:53.216 Active Power: Not Reported 00:08:53.216 Non-Operational Permissive Mode: Not Supported 00:08:53.216 00:08:53.216 Health Information 00:08:53.216 ================== 00:08:53.216 Critical Warnings: 00:08:53.216 Available Spare Space: OK 00:08:53.216 Temperature: OK 00:08:53.216 Device Reliability: OK 00:08:53.216 Read Only: No 00:08:53.216 Volatile Memory Backup: OK 00:08:53.216 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.216 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.216 Available Spare: 0% 00:08:53.216 Available Spare Threshold: 0% 00:08:53.216 Life Percentage Used: 0% 00:08:53.216 Data Units Read: 879 00:08:53.216 Data Units Written: 808 00:08:53.216 Host Read Commands: 39045 00:08:53.216 Host Write Commands: 38468 00:08:53.216 Controller Busy Time: 0 minutes 00:08:53.216 Power Cycles: 0 00:08:53.216 Power On Hours: 0 hours 00:08:53.216 Unsafe Shutdowns: 0 00:08:53.216 Unrecoverable Media Errors: 0 00:08:53.216 Lifetime Error Log Entries: 0 00:08:53.216 Warning Temperature Time: 0 minutes 00:08:53.216 Critical Temperature Time: 0 minutes 00:08:53.216 00:08:53.216 Number of Queues 00:08:53.217 ================ 00:08:53.217 Number of I/O Submission Queues: 64 00:08:53.217 Number of I/O Completion Queues: 64 00:08:53.217 00:08:53.217 ZNS Specific Controller Data 00:08:53.217 ============================ 00:08:53.217 Zone Append Size Limit: 0 00:08:53.217 00:08:53.217 00:08:53.217 Active Namespaces 00:08:53.217 ================= 00:08:53.217 Namespace ID:1 00:08:53.217 Error Recovery Timeout: Unlimited 00:08:53.217 Command Set Identifier: NVM (00h) 00:08:53.217 Deallocate: Supported 00:08:53.217 Deallocated/Unwritten Error: Supported 00:08:53.217 Deallocated Read Value: All 0x00 00:08:53.217 Deallocate in Write Zeroes: Not Supported 00:08:53.217 Deallocated Guard Field: 0xFFFF 00:08:53.217 Flush: Supported 00:08:53.217 Reservation: Not Supported 00:08:53.217 Namespace Sharing Capabilities: Multiple Controllers 00:08:53.217 Size (in LBAs): 262144 (1GiB) 00:08:53.217 Capacity (in LBAs): 262144 (1GiB) 00:08:53.217 Utilization (in LBAs): 262144 (1GiB) 00:08:53.217 Thin Provisioning: Not Supported 00:08:53.217 Per-NS Atomic Units: No 00:08:53.217 Maximum Single Source Range Length: 128 00:08:53.217 Maximum Copy Length: 128 00:08:53.217 Maximum Source Range Count: 128 00:08:53.217 NGUID/EUI64 Never Reused: No 00:08:53.217 Namespace Write Protected: No 00:08:53.217 Endurance group ID: 1 00:08:53.217 Number of LBA Formats: 8 00:08:53.217 Current LBA Format: LBA Format #04 00:08:53.217 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.217 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.217 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.217 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:53.217 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.217 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.217 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.217 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.217 00:08:53.217 Get Feature FDP: 00:08:53.217 ================ 00:08:53.217 Enabled: Yes 00:08:53.217 FDP configuration index: 0 00:08:53.217 00:08:53.217 FDP configurations log page 00:08:53.217 =========================== 00:08:53.217 Number of FDP configurations: 1 00:08:53.217 Version: 0 00:08:53.217 Size: 112 00:08:53.217 FDP Configuration Descriptor: 0 00:08:53.217 Descriptor Size: 96 00:08:53.217 Reclaim Group Identifier format: 2 00:08:53.217 FDP Volatile Write Cache: Not Present 00:08:53.217 FDP Configuration: Valid 00:08:53.217 Vendor Specific Size: 0 00:08:53.217 Number of Reclaim Groups: 2 00:08:53.217 Number of Reclaim Unit Handles: 8 00:08:53.217 Max Placement Identifiers: 128 00:08:53.217 Number of Namespaces Supported: 256 00:08:53.217 Reclaim Unit Nominal Size: 6000000 bytes 00:08:53.217 Estimated Reclaim Unit Time Limit: Not Reported 00:08:53.217 RUH Desc #000: RUH Type: Initially Isolated 00:08:53.217 RUH Desc #001: RUH Type: Initially Isolated 00:08:53.217 RUH Desc #002: RUH Type: Initially Isolated 00:08:53.217 RUH Desc #003: RUH Type: Initially Isolated 00:08:53.217 RUH Desc #004: RUH Type: Initially Isolated 00:08:53.217 RUH Desc #005: RUH Type: Initially Isolated 00:08:53.217 RUH Desc #006: RUH Type: Initially Isolated 00:08:53.217 RUH Desc #007: RUH Type: Initially Isolated 00:08:53.217 00:08:53.217 FDP reclaim unit handle usage log page 00:08:53.476 ====================================== 00:08:53.476 Number of Reclaim Unit Handles: 8 00:08:53.476 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:53.476 RUH Usage Desc #001: RUH Attributes: Unused 00:08:53.476 RUH Usage Desc #002: RUH Attributes: Unused 00:08:53.476 RUH Usage Desc #003: RUH Attributes: Unused 00:08:53.476 RUH Usage Desc #004: RUH Attributes: Unused 00:08:53.476 RUH Usage Desc #005: RUH Attributes: Unused 00:08:53.476 RUH Usage Desc #006: RUH Attributes: Unused 00:08:53.476 RUH Usage Desc #007: RUH Attributes: Unused 00:08:53.476 00:08:53.476 FDP statistics log page 00:08:53.476 ======================= 00:08:53.476 Host bytes with metadata written: 525180928 00:08:53.476 Media bytes with metadata written: 525238272 00:08:53.476 Media bytes erased: 0 00:08:53.476 00:08:53.476 FDP events log page 00:08:53.476 =================== 00:08:53.476 Number of FDP events: 0 00:08:53.476 00:08:53.476 NVM Specific Namespace Data 00:08:53.476 =========================== 00:08:53.476 Logical Block Storage Tag Mask: 0 00:08:53.476 Protection Information Capabilities: 00:08:53.476 16b Guard Protection Information Storage Tag Support: No 00:08:53.476 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.476 Storage Tag Check Read Support: No 00:08:53.476 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.476 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.476 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.476 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.476 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.476 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.476 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.476 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.476 ************************************ 00:08:53.476 END TEST nvme_identify 00:08:53.476 ************************************ 00:08:53.476 00:08:53.476 real 0m1.725s 00:08:53.476 user 0m0.644s 00:08:53.476 sys 0m0.884s 00:08:53.476 10:45:42 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.476 10:45:42 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:53.476 10:45:42 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:53.476 10:45:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.476 10:45:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.476 10:45:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:53.476 ************************************ 00:08:53.476 START TEST nvme_perf 00:08:53.476 ************************************ 00:08:53.476 10:45:42 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:53.476 10:45:42 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:54.855 Initializing NVMe Controllers 00:08:54.855 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:54.855 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:54.855 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:54.855 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:54.855 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:54.855 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:54.855 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:54.855 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:54.855 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:54.855 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:54.855 Initialization complete. Launching workers. 
00:08:54.855 ======================================================== 00:08:54.855 Latency(us) 00:08:54.855 Device Information : IOPS MiB/s Average min max 00:08:54.855 PCIE (0000:00:10.0) NSID 1 from core 0: 13509.91 158.32 9494.94 7729.44 47837.56 00:08:54.855 PCIE (0000:00:11.0) NSID 1 from core 0: 13509.91 158.32 9480.23 7716.96 46249.37 00:08:54.855 PCIE (0000:00:13.0) NSID 1 from core 0: 13509.91 158.32 9464.07 7747.99 45196.43 00:08:54.855 PCIE (0000:00:12.0) NSID 1 from core 0: 13509.91 158.32 9447.67 7794.93 43508.12 00:08:54.855 PCIE (0000:00:12.0) NSID 2 from core 0: 13509.91 158.32 9431.68 7864.39 41875.83 00:08:54.855 PCIE (0000:00:12.0) NSID 3 from core 0: 13509.91 158.32 9414.26 7855.85 40185.80 00:08:54.855 ======================================================== 00:08:54.855 Total : 81059.44 949.92 9455.48 7716.96 47837.56 00:08:54.855 00:08:54.855 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:54.855 ================================================================================= 00:08:54.855 1.00000% : 8001.182us 00:08:54.855 10.00000% : 8264.379us 00:08:54.855 25.00000% : 8632.855us 00:08:54.855 50.00000% : 9159.248us 00:08:54.855 75.00000% : 9738.281us 00:08:54.855 90.00000% : 10106.757us 00:08:54.855 95.00000% : 10317.314us 00:08:54.855 98.00000% : 11475.380us 00:08:54.855 99.00000% : 16528.758us 00:08:54.855 99.50000% : 37689.780us 00:08:54.855 99.90000% : 47375.422us 00:08:54.855 99.99000% : 47796.537us 00:08:54.855 99.99900% : 48007.094us 00:08:54.855 99.99990% : 48007.094us 00:08:54.855 99.99999% : 48007.094us 00:08:54.855 00:08:54.855 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:54.855 ================================================================================= 00:08:54.855 1.00000% : 8053.822us 00:08:54.855 10.00000% : 8317.018us 00:08:54.855 25.00000% : 8580.215us 00:08:54.855 50.00000% : 9211.888us 00:08:54.855 75.00000% : 9738.281us 00:08:54.855 90.00000% : 10054.117us 00:08:54.855 95.00000% : 10317.314us 00:08:54.855 98.00000% : 11370.101us 00:08:54.855 99.00000% : 15370.692us 00:08:54.855 99.50000% : 37058.108us 00:08:54.855 99.90000% : 45901.520us 00:08:54.855 99.99000% : 46322.635us 00:08:54.855 99.99900% : 46322.635us 00:08:54.855 99.99990% : 46322.635us 00:08:54.855 99.99999% : 46322.635us 00:08:54.855 00:08:54.855 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:54.855 ================================================================================= 00:08:54.855 1.00000% : 8053.822us 00:08:54.855 10.00000% : 8317.018us 00:08:54.855 25.00000% : 8632.855us 00:08:54.855 50.00000% : 9211.888us 00:08:54.855 75.00000% : 9738.281us 00:08:54.855 90.00000% : 10054.117us 00:08:54.855 95.00000% : 10264.675us 00:08:54.855 98.00000% : 11106.904us 00:08:54.855 99.00000% : 14633.741us 00:08:54.855 99.50000% : 36426.435us 00:08:54.855 99.90000% : 44848.733us 00:08:54.855 99.99000% : 45269.847us 00:08:54.855 99.99900% : 45269.847us 00:08:54.855 99.99990% : 45269.847us 00:08:54.855 99.99999% : 45269.847us 00:08:54.855 00:08:54.855 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:54.855 ================================================================================= 00:08:54.855 1.00000% : 8106.461us 00:08:54.855 10.00000% : 8317.018us 00:08:54.855 25.00000% : 8632.855us 00:08:54.855 50.00000% : 9211.888us 00:08:54.855 75.00000% : 9738.281us 00:08:54.855 90.00000% : 10054.117us 00:08:54.855 95.00000% : 10317.314us 00:08:54.855 98.00000% : 11159.544us 00:08:54.855 
99.00000% : 14212.627us 00:08:54.855 99.50000% : 34952.533us 00:08:54.855 99.90000% : 43164.273us 00:08:54.855 99.99000% : 43585.388us 00:08:54.855 99.99900% : 43585.388us 00:08:54.855 99.99990% : 43585.388us 00:08:54.855 99.99999% : 43585.388us 00:08:54.855 00:08:54.855 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:54.855 ================================================================================= 00:08:54.855 1.00000% : 8106.461us 00:08:54.855 10.00000% : 8317.018us 00:08:54.855 25.00000% : 8632.855us 00:08:54.855 50.00000% : 9211.888us 00:08:54.855 75.00000% : 9738.281us 00:08:54.855 90.00000% : 10054.117us 00:08:54.855 95.00000% : 10317.314us 00:08:54.855 98.00000% : 11212.183us 00:08:54.855 99.00000% : 13791.512us 00:08:54.855 99.50000% : 33478.631us 00:08:54.855 99.90000% : 41479.814us 00:08:54.855 99.99000% : 41900.929us 00:08:54.855 99.99900% : 41900.929us 00:08:54.855 99.99990% : 41900.929us 00:08:54.855 99.99999% : 41900.929us 00:08:54.855 00:08:54.855 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:54.855 ================================================================================= 00:08:54.855 1.00000% : 8106.461us 00:08:54.855 10.00000% : 8317.018us 00:08:54.855 25.00000% : 8632.855us 00:08:54.855 50.00000% : 9211.888us 00:08:54.855 75.00000% : 9738.281us 00:08:54.855 90.00000% : 10054.117us 00:08:54.855 95.00000% : 10317.314us 00:08:54.855 98.00000% : 11264.822us 00:08:54.855 99.00000% : 13580.954us 00:08:54.855 99.50000% : 31794.172us 00:08:54.855 99.90000% : 39795.354us 00:08:54.855 99.99000% : 40216.469us 00:08:54.855 99.99900% : 40216.469us 00:08:54.855 99.99990% : 40216.469us 00:08:54.855 99.99999% : 40216.469us 00:08:54.855 00:08:54.855 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:54.855 ============================================================================== 00:08:54.855 Range in us Cumulative IO count 00:08:54.855 7685.346 - 7737.986: 0.0147% ( 2) 00:08:54.855 7737.986 - 7790.625: 0.0369% ( 3) 00:08:54.855 7790.625 - 7843.264: 0.1179% ( 11) 00:08:54.855 7843.264 - 7895.904: 0.2432% ( 17) 00:08:54.855 7895.904 - 7948.543: 0.6117% ( 50) 00:08:54.855 7948.543 - 8001.182: 1.2751% ( 90) 00:08:54.855 8001.182 - 8053.822: 2.3069% ( 140) 00:08:54.855 8053.822 - 8106.461: 3.9136% ( 218) 00:08:54.855 8106.461 - 8159.100: 5.8667% ( 265) 00:08:54.855 8159.100 - 8211.740: 8.0041% ( 290) 00:08:54.855 8211.740 - 8264.379: 10.2963% ( 311) 00:08:54.855 8264.379 - 8317.018: 12.5737% ( 309) 00:08:54.855 8317.018 - 8369.658: 14.8806% ( 313) 00:08:54.855 8369.658 - 8422.297: 17.1654% ( 310) 00:08:54.855 8422.297 - 8474.937: 19.5755% ( 327) 00:08:54.855 8474.937 - 8527.576: 21.9045% ( 316) 00:08:54.855 8527.576 - 8580.215: 24.2188% ( 314) 00:08:54.855 8580.215 - 8632.855: 26.6583% ( 331) 00:08:54.855 8632.855 - 8685.494: 28.9873% ( 316) 00:08:54.855 8685.494 - 8738.133: 31.3311% ( 318) 00:08:54.855 8738.133 - 8790.773: 33.6380% ( 313) 00:08:54.855 8790.773 - 8843.412: 36.1807% ( 345) 00:08:54.855 8843.412 - 8896.051: 38.7603% ( 350) 00:08:54.855 8896.051 - 8948.691: 41.3031% ( 345) 00:08:54.855 8948.691 - 9001.330: 43.7942% ( 338) 00:08:54.855 9001.330 - 9053.969: 46.3665% ( 349) 00:08:54.855 9053.969 - 9106.609: 49.0419% ( 363) 00:08:54.855 9106.609 - 9159.248: 51.4225% ( 323) 00:08:54.855 9159.248 - 9211.888: 53.5820% ( 293) 00:08:54.855 9211.888 - 9264.527: 55.7783% ( 298) 00:08:54.855 9264.527 - 9317.166: 57.9009% ( 288) 00:08:54.855 9317.166 - 9369.806: 59.9425% ( 277) 00:08:54.855 9369.806 - 
9422.445: 62.0725% ( 289) 00:08:54.855 9422.445 - 9475.084: 64.3278% ( 306) 00:08:54.855 9475.084 - 9527.724: 66.6274% ( 312) 00:08:54.855 9527.724 - 9580.363: 69.0743% ( 332) 00:08:54.855 9580.363 - 9633.002: 71.4402% ( 321) 00:08:54.855 9633.002 - 9685.642: 73.8723% ( 330) 00:08:54.855 9685.642 - 9738.281: 76.3267% ( 333) 00:08:54.855 9738.281 - 9790.920: 78.7883% ( 334) 00:08:54.855 9790.920 - 9843.560: 81.1984% ( 327) 00:08:54.855 9843.560 - 9896.199: 83.4906% ( 311) 00:08:54.855 9896.199 - 9948.839: 85.7017% ( 300) 00:08:54.855 9948.839 - 10001.478: 87.8538% ( 292) 00:08:54.855 10001.478 - 10054.117: 89.7553% ( 258) 00:08:54.855 10054.117 - 10106.757: 91.4136% ( 225) 00:08:54.855 10106.757 - 10159.396: 92.7255% ( 178) 00:08:54.855 10159.396 - 10212.035: 93.7353% ( 137) 00:08:54.855 10212.035 - 10264.675: 94.4502% ( 97) 00:08:54.855 10264.675 - 10317.314: 95.0767% ( 85) 00:08:54.855 10317.314 - 10369.953: 95.5631% ( 66) 00:08:54.855 10369.953 - 10422.593: 96.0495% ( 66) 00:08:54.856 10422.593 - 10475.232: 96.4180% ( 50) 00:08:54.856 10475.232 - 10527.871: 96.7423% ( 44) 00:08:54.856 10527.871 - 10580.511: 97.0740% ( 45) 00:08:54.856 10580.511 - 10633.150: 97.2435% ( 23) 00:08:54.856 10633.150 - 10685.790: 97.3909% ( 20) 00:08:54.856 10685.790 - 10738.429: 97.5162% ( 17) 00:08:54.856 10738.429 - 10791.068: 97.6047% ( 12) 00:08:54.856 10791.068 - 10843.708: 97.6489% ( 6) 00:08:54.856 10843.708 - 10896.347: 97.6931% ( 6) 00:08:54.856 10896.347 - 10948.986: 97.7078% ( 2) 00:08:54.856 10948.986 - 11001.626: 97.7521% ( 6) 00:08:54.856 11001.626 - 11054.265: 97.7815% ( 4) 00:08:54.856 11054.265 - 11106.904: 97.8184% ( 5) 00:08:54.856 11106.904 - 11159.544: 97.8479% ( 4) 00:08:54.856 11159.544 - 11212.183: 97.8921% ( 6) 00:08:54.856 11212.183 - 11264.822: 97.9142% ( 3) 00:08:54.856 11264.822 - 11317.462: 97.9437% ( 4) 00:08:54.856 11317.462 - 11370.101: 97.9732% ( 4) 00:08:54.856 11370.101 - 11422.741: 97.9879% ( 2) 00:08:54.856 11422.741 - 11475.380: 98.0174% ( 4) 00:08:54.856 11475.380 - 11528.019: 98.0248% ( 1) 00:08:54.856 11528.019 - 11580.659: 98.0395% ( 2) 00:08:54.856 11580.659 - 11633.298: 98.0542% ( 2) 00:08:54.856 11633.298 - 11685.937: 98.0837% ( 4) 00:08:54.856 11685.937 - 11738.577: 98.1132% ( 4) 00:08:54.856 11738.577 - 11791.216: 98.1427% ( 4) 00:08:54.856 11791.216 - 11843.855: 98.1648% ( 3) 00:08:54.856 11843.855 - 11896.495: 98.1869% ( 3) 00:08:54.856 11896.495 - 11949.134: 98.1943% ( 1) 00:08:54.856 11949.134 - 12001.773: 98.2017% ( 1) 00:08:54.856 12001.773 - 12054.413: 98.2238% ( 3) 00:08:54.856 12107.052 - 12159.692: 98.2459% ( 3) 00:08:54.856 12159.692 - 12212.331: 98.2532% ( 1) 00:08:54.856 12212.331 - 12264.970: 98.2680% ( 2) 00:08:54.856 12264.970 - 12317.610: 98.2827% ( 2) 00:08:54.856 12317.610 - 12370.249: 98.2901% ( 1) 00:08:54.856 12370.249 - 12422.888: 98.3048% ( 2) 00:08:54.856 12422.888 - 12475.528: 98.3122% ( 1) 00:08:54.856 12475.528 - 12528.167: 98.3269% ( 2) 00:08:54.856 12528.167 - 12580.806: 98.3343% ( 1) 00:08:54.856 12580.806 - 12633.446: 98.3491% ( 2) 00:08:54.856 12633.446 - 12686.085: 98.3638% ( 2) 00:08:54.856 12686.085 - 12738.724: 98.3712% ( 1) 00:08:54.856 12738.724 - 12791.364: 98.3785% ( 1) 00:08:54.856 12791.364 - 12844.003: 98.3933% ( 2) 00:08:54.856 12844.003 - 12896.643: 98.4080% ( 2) 00:08:54.856 12896.643 - 12949.282: 98.4154% ( 1) 00:08:54.856 12949.282 - 13001.921: 98.4301% ( 2) 00:08:54.856 13001.921 - 13054.561: 98.4449% ( 2) 00:08:54.856 13054.561 - 13107.200: 98.4522% ( 1) 00:08:54.856 13107.200 - 13159.839: 98.4744% ( 3) 
00:08:54.856 13159.839 - 13212.479: 98.4817% ( 1) 00:08:54.856 13212.479 - 13265.118: 98.4891% ( 1) 00:08:54.856 13265.118 - 13317.757: 98.5038% ( 2) 00:08:54.856 13317.757 - 13370.397: 98.5112% ( 1) 00:08:54.856 13370.397 - 13423.036: 98.5333% ( 3) 00:08:54.856 13423.036 - 13475.676: 98.5407% ( 1) 00:08:54.856 13475.676 - 13580.954: 98.5628% ( 3) 00:08:54.856 13580.954 - 13686.233: 98.5849% ( 3) 00:08:54.856 14633.741 - 14739.020: 98.5996% ( 2) 00:08:54.856 14739.020 - 14844.299: 98.6291% ( 4) 00:08:54.856 14844.299 - 14949.578: 98.6586% ( 4) 00:08:54.856 14949.578 - 15054.856: 98.6733% ( 2) 00:08:54.856 15054.856 - 15160.135: 98.6881% ( 2) 00:08:54.856 15160.135 - 15265.414: 98.7176% ( 4) 00:08:54.856 15265.414 - 15370.692: 98.7397% ( 3) 00:08:54.856 15370.692 - 15475.971: 98.7618% ( 3) 00:08:54.856 15475.971 - 15581.250: 98.7913% ( 4) 00:08:54.856 15581.250 - 15686.529: 98.8134% ( 3) 00:08:54.856 15686.529 - 15791.807: 98.8355% ( 3) 00:08:54.856 15791.807 - 15897.086: 98.8650% ( 4) 00:08:54.856 15897.086 - 16002.365: 98.8945% ( 4) 00:08:54.856 16002.365 - 16107.643: 98.9166% ( 3) 00:08:54.856 16107.643 - 16212.922: 98.9387% ( 3) 00:08:54.856 16212.922 - 16318.201: 98.9608% ( 3) 00:08:54.856 16318.201 - 16423.480: 98.9903% ( 4) 00:08:54.856 16423.480 - 16528.758: 99.0124% ( 3) 00:08:54.856 16528.758 - 16634.037: 99.0419% ( 4) 00:08:54.856 16634.037 - 16739.316: 99.0566% ( 2) 00:08:54.856 35373.648 - 35584.206: 99.0935% ( 5) 00:08:54.856 35584.206 - 35794.763: 99.1303% ( 5) 00:08:54.856 35794.763 - 36005.320: 99.1745% ( 6) 00:08:54.856 36005.320 - 36215.878: 99.2261% ( 7) 00:08:54.856 36215.878 - 36426.435: 99.2703% ( 6) 00:08:54.856 36426.435 - 36636.993: 99.3146% ( 6) 00:08:54.856 36636.993 - 36847.550: 99.3588% ( 6) 00:08:54.856 36847.550 - 37058.108: 99.4030% ( 6) 00:08:54.856 37058.108 - 37268.665: 99.4472% ( 6) 00:08:54.856 37268.665 - 37479.222: 99.4915% ( 6) 00:08:54.856 37479.222 - 37689.780: 99.5283% ( 5) 00:08:54.856 45269.847 - 45480.405: 99.5578% ( 4) 00:08:54.856 45480.405 - 45690.962: 99.5946% ( 5) 00:08:54.856 45690.962 - 45901.520: 99.6315% ( 5) 00:08:54.856 45901.520 - 46112.077: 99.6683% ( 5) 00:08:54.856 46112.077 - 46322.635: 99.7199% ( 7) 00:08:54.856 46322.635 - 46533.192: 99.7568% ( 5) 00:08:54.856 46533.192 - 46743.749: 99.7863% ( 4) 00:08:54.856 46743.749 - 46954.307: 99.8379% ( 7) 00:08:54.856 46954.307 - 47164.864: 99.8821% ( 6) 00:08:54.856 47164.864 - 47375.422: 99.9189% ( 5) 00:08:54.856 47375.422 - 47585.979: 99.9631% ( 6) 00:08:54.856 47585.979 - 47796.537: 99.9926% ( 4) 00:08:54.856 47796.537 - 48007.094: 100.0000% ( 1) 00:08:54.856 00:08:54.856 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:54.856 ============================================================================== 00:08:54.856 Range in us Cumulative IO count 00:08:54.856 7685.346 - 7737.986: 0.0147% ( 2) 00:08:54.856 7737.986 - 7790.625: 0.0590% ( 6) 00:08:54.856 7790.625 - 7843.264: 0.1032% ( 6) 00:08:54.856 7843.264 - 7895.904: 0.1695% ( 9) 00:08:54.856 7895.904 - 7948.543: 0.3538% ( 25) 00:08:54.856 7948.543 - 8001.182: 0.6117% ( 35) 00:08:54.856 8001.182 - 8053.822: 1.1498% ( 73) 00:08:54.856 8053.822 - 8106.461: 2.0342% ( 120) 00:08:54.856 8106.461 - 8159.100: 3.5598% ( 207) 00:08:54.856 8159.100 - 8211.740: 5.7488% ( 297) 00:08:54.856 8211.740 - 8264.379: 8.1147% ( 321) 00:08:54.856 8264.379 - 8317.018: 10.8785% ( 375) 00:08:54.856 8317.018 - 8369.658: 13.6792% ( 380) 00:08:54.856 8369.658 - 8422.297: 16.6347% ( 401) 00:08:54.856 8422.297 - 8474.937: 19.4870% ( 387) 
00:08:54.856 8474.937 - 8527.576: 22.3467% ( 388) 00:08:54.856 8527.576 - 8580.215: 25.1327% ( 378) 00:08:54.856 8580.215 - 8632.855: 27.7786% ( 359) 00:08:54.856 8632.855 - 8685.494: 30.5646% ( 378) 00:08:54.856 8685.494 - 8738.133: 33.3063% ( 372) 00:08:54.856 8738.133 - 8790.773: 35.9154% ( 354) 00:08:54.856 8790.773 - 8843.412: 38.4802% ( 348) 00:08:54.856 8843.412 - 8896.051: 40.8977% ( 328) 00:08:54.856 8896.051 - 8948.691: 42.8803% ( 269) 00:08:54.856 8948.691 - 9001.330: 44.6197% ( 236) 00:08:54.856 9001.330 - 9053.969: 46.1748% ( 211) 00:08:54.856 9053.969 - 9106.609: 47.7668% ( 216) 00:08:54.856 9106.609 - 9159.248: 49.5504% ( 242) 00:08:54.856 9159.248 - 9211.888: 51.4741% ( 261) 00:08:54.856 9211.888 - 9264.527: 53.7441% ( 308) 00:08:54.856 9264.527 - 9317.166: 56.1026% ( 320) 00:08:54.856 9317.166 - 9369.806: 58.5569% ( 333) 00:08:54.856 9369.806 - 9422.445: 60.9670% ( 327) 00:08:54.856 9422.445 - 9475.084: 63.5318% ( 348) 00:08:54.856 9475.084 - 9527.724: 66.1262% ( 352) 00:08:54.856 9527.724 - 9580.363: 68.7647% ( 358) 00:08:54.856 9580.363 - 9633.002: 71.4402% ( 363) 00:08:54.856 9633.002 - 9685.642: 74.1672% ( 370) 00:08:54.856 9685.642 - 9738.281: 76.9900% ( 383) 00:08:54.856 9738.281 - 9790.920: 79.7096% ( 369) 00:08:54.856 9790.920 - 9843.560: 82.4145% ( 367) 00:08:54.856 9843.560 - 9896.199: 85.0973% ( 364) 00:08:54.856 9896.199 - 9948.839: 87.5369% ( 331) 00:08:54.856 9948.839 - 10001.478: 89.5563% ( 274) 00:08:54.856 10001.478 - 10054.117: 91.0820% ( 207) 00:08:54.856 10054.117 - 10106.757: 92.2907% ( 164) 00:08:54.856 10106.757 - 10159.396: 93.2193% ( 126) 00:08:54.856 10159.396 - 10212.035: 93.9858% ( 104) 00:08:54.856 10212.035 - 10264.675: 94.5755% ( 80) 00:08:54.856 10264.675 - 10317.314: 95.1209% ( 74) 00:08:54.856 10317.314 - 10369.953: 95.5705% ( 61) 00:08:54.856 10369.953 - 10422.593: 95.9832% ( 56) 00:08:54.856 10422.593 - 10475.232: 96.3959% ( 56) 00:08:54.856 10475.232 - 10527.871: 96.6834% ( 39) 00:08:54.856 10527.871 - 10580.511: 96.9634% ( 38) 00:08:54.856 10580.511 - 10633.150: 97.1624% ( 27) 00:08:54.856 10633.150 - 10685.790: 97.3172% ( 21) 00:08:54.856 10685.790 - 10738.429: 97.3909% ( 10) 00:08:54.856 10738.429 - 10791.068: 97.4646% ( 10) 00:08:54.856 10791.068 - 10843.708: 97.5310% ( 9) 00:08:54.856 10843.708 - 10896.347: 97.5899% ( 8) 00:08:54.856 10896.347 - 10948.986: 97.6489% ( 8) 00:08:54.856 10948.986 - 11001.626: 97.7005% ( 7) 00:08:54.856 11001.626 - 11054.265: 97.7742% ( 10) 00:08:54.856 11054.265 - 11106.904: 97.8405% ( 9) 00:08:54.856 11106.904 - 11159.544: 97.9068% ( 9) 00:08:54.856 11159.544 - 11212.183: 97.9363% ( 4) 00:08:54.856 11212.183 - 11264.822: 97.9732% ( 5) 00:08:54.856 11264.822 - 11317.462: 97.9953% ( 3) 00:08:54.856 11317.462 - 11370.101: 98.0100% ( 2) 00:08:54.856 11370.101 - 11422.741: 98.0321% ( 3) 00:08:54.856 11422.741 - 11475.380: 98.0542% ( 3) 00:08:54.856 11475.380 - 11528.019: 98.0911% ( 5) 00:08:54.856 11528.019 - 11580.659: 98.1132% ( 3) 00:08:54.856 11580.659 - 11633.298: 98.1427% ( 4) 00:08:54.856 11633.298 - 11685.937: 98.1648% ( 3) 00:08:54.856 11685.937 - 11738.577: 98.1722% ( 1) 00:08:54.856 11738.577 - 11791.216: 98.1869% ( 2) 00:08:54.856 11791.216 - 11843.855: 98.2017% ( 2) 00:08:54.856 11843.855 - 11896.495: 98.2164% ( 2) 00:08:54.857 11896.495 - 11949.134: 98.2311% ( 2) 00:08:54.857 11949.134 - 12001.773: 98.2459% ( 2) 00:08:54.857 12001.773 - 12054.413: 98.2606% ( 2) 00:08:54.857 12054.413 - 12107.052: 98.2754% ( 2) 00:08:54.857 12107.052 - 12159.692: 98.2901% ( 2) 00:08:54.857 12159.692 - 
12212.331: 98.3048% ( 2) 00:08:54.857 12212.331 - 12264.970: 98.3196% ( 2) 00:08:54.857 12264.970 - 12317.610: 98.3343% ( 2) 00:08:54.857 12317.610 - 12370.249: 98.3491% ( 2) 00:08:54.857 12370.249 - 12422.888: 98.3638% ( 2) 00:08:54.857 12422.888 - 12475.528: 98.3785% ( 2) 00:08:54.857 12475.528 - 12528.167: 98.3933% ( 2) 00:08:54.857 12528.167 - 12580.806: 98.4080% ( 2) 00:08:54.857 12580.806 - 12633.446: 98.4228% ( 2) 00:08:54.857 12633.446 - 12686.085: 98.4375% ( 2) 00:08:54.857 12686.085 - 12738.724: 98.4522% ( 2) 00:08:54.857 12738.724 - 12791.364: 98.4670% ( 2) 00:08:54.857 12791.364 - 12844.003: 98.4817% ( 2) 00:08:54.857 12844.003 - 12896.643: 98.4965% ( 2) 00:08:54.857 12896.643 - 12949.282: 98.5112% ( 2) 00:08:54.857 12949.282 - 13001.921: 98.5259% ( 2) 00:08:54.857 13001.921 - 13054.561: 98.5407% ( 2) 00:08:54.857 13054.561 - 13107.200: 98.5481% ( 1) 00:08:54.857 13107.200 - 13159.839: 98.5628% ( 2) 00:08:54.857 13159.839 - 13212.479: 98.5702% ( 1) 00:08:54.857 13212.479 - 13265.118: 98.5849% ( 2) 00:08:54.857 13896.790 - 14002.069: 98.6144% ( 4) 00:08:54.857 14002.069 - 14107.348: 98.6439% ( 4) 00:08:54.857 14107.348 - 14212.627: 98.6733% ( 4) 00:08:54.857 14212.627 - 14317.905: 98.7102% ( 5) 00:08:54.857 14317.905 - 14423.184: 98.7323% ( 3) 00:08:54.857 14423.184 - 14528.463: 98.7618% ( 4) 00:08:54.857 14528.463 - 14633.741: 98.7913% ( 4) 00:08:54.857 14633.741 - 14739.020: 98.8208% ( 4) 00:08:54.857 14739.020 - 14844.299: 98.8576% ( 5) 00:08:54.857 14844.299 - 14949.578: 98.8945% ( 5) 00:08:54.857 14949.578 - 15054.856: 98.9166% ( 3) 00:08:54.857 15054.856 - 15160.135: 98.9460% ( 4) 00:08:54.857 15160.135 - 15265.414: 98.9829% ( 5) 00:08:54.857 15265.414 - 15370.692: 99.0198% ( 5) 00:08:54.857 15370.692 - 15475.971: 99.0492% ( 4) 00:08:54.857 15475.971 - 15581.250: 99.0566% ( 1) 00:08:54.857 34952.533 - 35163.091: 99.1008% ( 6) 00:08:54.857 35163.091 - 35373.648: 99.1524% ( 7) 00:08:54.857 35373.648 - 35584.206: 99.1966% ( 6) 00:08:54.857 35584.206 - 35794.763: 99.2482% ( 7) 00:08:54.857 35794.763 - 36005.320: 99.2998% ( 7) 00:08:54.857 36005.320 - 36215.878: 99.3440% ( 6) 00:08:54.857 36215.878 - 36426.435: 99.3956% ( 7) 00:08:54.857 36426.435 - 36636.993: 99.4472% ( 7) 00:08:54.857 36636.993 - 36847.550: 99.4915% ( 6) 00:08:54.857 36847.550 - 37058.108: 99.5283% ( 5) 00:08:54.857 43795.945 - 44006.503: 99.5430% ( 2) 00:08:54.857 44006.503 - 44217.060: 99.5799% ( 5) 00:08:54.857 44217.060 - 44427.618: 99.6241% ( 6) 00:08:54.857 44427.618 - 44638.175: 99.6610% ( 5) 00:08:54.857 44638.175 - 44848.733: 99.7126% ( 7) 00:08:54.857 44848.733 - 45059.290: 99.7568% ( 6) 00:08:54.857 45059.290 - 45269.847: 99.8010% ( 6) 00:08:54.857 45269.847 - 45480.405: 99.8452% ( 6) 00:08:54.857 45480.405 - 45690.962: 99.8894% ( 6) 00:08:54.857 45690.962 - 45901.520: 99.9263% ( 5) 00:08:54.857 45901.520 - 46112.077: 99.9705% ( 6) 00:08:54.857 46112.077 - 46322.635: 100.0000% ( 4) 00:08:54.857 00:08:54.857 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:54.857 ============================================================================== 00:08:54.857 Range in us Cumulative IO count 00:08:54.857 7737.986 - 7790.625: 0.0369% ( 5) 00:08:54.857 7790.625 - 7843.264: 0.1032% ( 9) 00:08:54.857 7843.264 - 7895.904: 0.1843% ( 11) 00:08:54.857 7895.904 - 7948.543: 0.2874% ( 14) 00:08:54.857 7948.543 - 8001.182: 0.5675% ( 38) 00:08:54.857 8001.182 - 8053.822: 1.0687% ( 68) 00:08:54.857 8053.822 - 8106.461: 1.9605% ( 121) 00:08:54.857 8106.461 - 8159.100: 3.3314% ( 186) 00:08:54.857 
8159.100 - 8211.740: 5.4688% ( 290) 00:08:54.857 8211.740 - 8264.379: 7.8420% ( 322) 00:08:54.857 8264.379 - 8317.018: 10.5542% ( 368) 00:08:54.857 8317.018 - 8369.658: 13.3844% ( 384) 00:08:54.857 8369.658 - 8422.297: 16.3031% ( 396) 00:08:54.857 8422.297 - 8474.937: 19.0964% ( 379) 00:08:54.857 8474.937 - 8527.576: 21.9340% ( 385) 00:08:54.857 8527.576 - 8580.215: 24.7420% ( 381) 00:08:54.857 8580.215 - 8632.855: 27.5133% ( 376) 00:08:54.857 8632.855 - 8685.494: 30.1739% ( 361) 00:08:54.857 8685.494 - 8738.133: 32.8936% ( 369) 00:08:54.857 8738.133 - 8790.773: 35.5985% ( 367) 00:08:54.857 8790.773 - 8843.412: 38.1560% ( 347) 00:08:54.857 8843.412 - 8896.051: 40.5734% ( 328) 00:08:54.857 8896.051 - 8948.691: 42.7624% ( 297) 00:08:54.857 8948.691 - 9001.330: 44.5239% ( 239) 00:08:54.857 9001.330 - 9053.969: 46.0643% ( 209) 00:08:54.857 9053.969 - 9106.609: 47.6562% ( 216) 00:08:54.857 9106.609 - 9159.248: 49.6020% ( 264) 00:08:54.857 9159.248 - 9211.888: 51.6583% ( 279) 00:08:54.857 9211.888 - 9264.527: 53.7367% ( 282) 00:08:54.857 9264.527 - 9317.166: 56.0068% ( 308) 00:08:54.857 9317.166 - 9369.806: 58.5348% ( 343) 00:08:54.857 9369.806 - 9422.445: 61.1144% ( 350) 00:08:54.857 9422.445 - 9475.084: 63.6424% ( 343) 00:08:54.857 9475.084 - 9527.724: 66.1557% ( 341) 00:08:54.857 9527.724 - 9580.363: 68.8458% ( 365) 00:08:54.857 9580.363 - 9633.002: 71.6686% ( 383) 00:08:54.857 9633.002 - 9685.642: 74.4104% ( 372) 00:08:54.857 9685.642 - 9738.281: 77.1521% ( 372) 00:08:54.857 9738.281 - 9790.920: 79.9012% ( 373) 00:08:54.857 9790.920 - 9843.560: 82.7167% ( 382) 00:08:54.857 9843.560 - 9896.199: 85.2447% ( 343) 00:08:54.857 9896.199 - 9948.839: 87.6621% ( 328) 00:08:54.857 9948.839 - 10001.478: 89.6669% ( 272) 00:08:54.857 10001.478 - 10054.117: 91.3547% ( 229) 00:08:54.857 10054.117 - 10106.757: 92.5855% ( 167) 00:08:54.857 10106.757 - 10159.396: 93.6468% ( 144) 00:08:54.857 10159.396 - 10212.035: 94.4649% ( 111) 00:08:54.857 10212.035 - 10264.675: 95.1135% ( 88) 00:08:54.857 10264.675 - 10317.314: 95.6736% ( 76) 00:08:54.857 10317.314 - 10369.953: 96.1601% ( 66) 00:08:54.857 10369.953 - 10422.593: 96.6023% ( 60) 00:08:54.857 10422.593 - 10475.232: 96.9192% ( 43) 00:08:54.857 10475.232 - 10527.871: 97.1403% ( 30) 00:08:54.857 10527.871 - 10580.511: 97.3688% ( 31) 00:08:54.857 10580.511 - 10633.150: 97.5383% ( 23) 00:08:54.857 10633.150 - 10685.790: 97.6857% ( 20) 00:08:54.857 10685.790 - 10738.429: 97.7521% ( 9) 00:08:54.857 10738.429 - 10791.068: 97.7889% ( 5) 00:08:54.857 10791.068 - 10843.708: 97.8258% ( 5) 00:08:54.857 10843.708 - 10896.347: 97.8700% ( 6) 00:08:54.857 10896.347 - 10948.986: 97.9068% ( 5) 00:08:54.857 10948.986 - 11001.626: 97.9511% ( 6) 00:08:54.857 11001.626 - 11054.265: 97.9805% ( 4) 00:08:54.857 11054.265 - 11106.904: 98.0027% ( 3) 00:08:54.857 11106.904 - 11159.544: 98.0174% ( 2) 00:08:54.857 11159.544 - 11212.183: 98.0616% ( 6) 00:08:54.857 11212.183 - 11264.822: 98.0985% ( 5) 00:08:54.857 11264.822 - 11317.462: 98.1353% ( 5) 00:08:54.857 11317.462 - 11370.101: 98.1648% ( 4) 00:08:54.857 11370.101 - 11422.741: 98.1943% ( 4) 00:08:54.857 11422.741 - 11475.380: 98.2090% ( 2) 00:08:54.857 11475.380 - 11528.019: 98.2311% ( 3) 00:08:54.857 11528.019 - 11580.659: 98.2385% ( 1) 00:08:54.857 11580.659 - 11633.298: 98.2532% ( 2) 00:08:54.857 11633.298 - 11685.937: 98.2754% ( 3) 00:08:54.857 11685.937 - 11738.577: 98.2901% ( 2) 00:08:54.857 11738.577 - 11791.216: 98.3048% ( 2) 00:08:54.857 11791.216 - 11843.855: 98.3196% ( 2) 00:08:54.857 11843.855 - 11896.495: 98.3269% ( 1) 
00:08:54.857 11896.495 - 11949.134: 98.3417% ( 2) 00:08:54.857 11949.134 - 12001.773: 98.3564% ( 2) 00:08:54.857 12001.773 - 12054.413: 98.3712% ( 2) 00:08:54.857 12054.413 - 12107.052: 98.3859% ( 2) 00:08:54.857 12107.052 - 12159.692: 98.4006% ( 2) 00:08:54.857 12159.692 - 12212.331: 98.4154% ( 2) 00:08:54.857 12212.331 - 12264.970: 98.4375% ( 3) 00:08:54.857 12264.970 - 12317.610: 98.4449% ( 1) 00:08:54.857 12317.610 - 12370.249: 98.4596% ( 2) 00:08:54.857 12370.249 - 12422.888: 98.4744% ( 2) 00:08:54.857 12422.888 - 12475.528: 98.4891% ( 2) 00:08:54.857 12475.528 - 12528.167: 98.5038% ( 2) 00:08:54.857 12528.167 - 12580.806: 98.5186% ( 2) 00:08:54.857 12580.806 - 12633.446: 98.5333% ( 2) 00:08:54.857 12633.446 - 12686.085: 98.5481% ( 2) 00:08:54.857 12686.085 - 12738.724: 98.5628% ( 2) 00:08:54.857 12738.724 - 12791.364: 98.5775% ( 2) 00:08:54.857 12791.364 - 12844.003: 98.5849% ( 1) 00:08:54.857 13212.479 - 13265.118: 98.5996% ( 2) 00:08:54.857 13265.118 - 13317.757: 98.6144% ( 2) 00:08:54.857 13317.757 - 13370.397: 98.6291% ( 2) 00:08:54.857 13370.397 - 13423.036: 98.6439% ( 2) 00:08:54.857 13423.036 - 13475.676: 98.6733% ( 4) 00:08:54.857 13475.676 - 13580.954: 98.7028% ( 4) 00:08:54.857 13580.954 - 13686.233: 98.7323% ( 4) 00:08:54.857 13686.233 - 13791.512: 98.7692% ( 5) 00:08:54.857 13791.512 - 13896.790: 98.7986% ( 4) 00:08:54.857 13896.790 - 14002.069: 98.8355% ( 5) 00:08:54.857 14002.069 - 14107.348: 98.8723% ( 5) 00:08:54.857 14107.348 - 14212.627: 98.9018% ( 4) 00:08:54.857 14212.627 - 14317.905: 98.9313% ( 4) 00:08:54.857 14317.905 - 14423.184: 98.9608% ( 4) 00:08:54.857 14423.184 - 14528.463: 98.9976% ( 5) 00:08:54.857 14528.463 - 14633.741: 99.0271% ( 4) 00:08:54.857 14633.741 - 14739.020: 99.0566% ( 4) 00:08:54.857 34110.304 - 34320.861: 99.0713% ( 2) 00:08:54.857 34320.861 - 34531.418: 99.1156% ( 6) 00:08:54.857 34531.418 - 34741.976: 99.1672% ( 7) 00:08:54.857 34741.976 - 34952.533: 99.2114% ( 6) 00:08:54.857 34952.533 - 35163.091: 99.2556% ( 6) 00:08:54.857 35163.091 - 35373.648: 99.3072% ( 7) 00:08:54.858 35373.648 - 35584.206: 99.3514% ( 6) 00:08:54.858 35584.206 - 35794.763: 99.4030% ( 7) 00:08:54.858 35794.763 - 36005.320: 99.4472% ( 6) 00:08:54.858 36005.320 - 36215.878: 99.4988% ( 7) 00:08:54.858 36215.878 - 36426.435: 99.5283% ( 4) 00:08:54.858 42743.158 - 42953.716: 99.5430% ( 2) 00:08:54.858 42953.716 - 43164.273: 99.5799% ( 5) 00:08:54.858 43164.273 - 43374.831: 99.6241% ( 6) 00:08:54.858 43374.831 - 43585.388: 99.6610% ( 5) 00:08:54.858 43585.388 - 43795.945: 99.6978% ( 5) 00:08:54.858 43795.945 - 44006.503: 99.7494% ( 7) 00:08:54.858 44006.503 - 44217.060: 99.7936% ( 6) 00:08:54.858 44217.060 - 44427.618: 99.8379% ( 6) 00:08:54.858 44427.618 - 44638.175: 99.8821% ( 6) 00:08:54.858 44638.175 - 44848.733: 99.9263% ( 6) 00:08:54.858 44848.733 - 45059.290: 99.9705% ( 6) 00:08:54.858 45059.290 - 45269.847: 100.0000% ( 4) 00:08:54.858 00:08:54.858 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:54.858 ============================================================================== 00:08:54.858 Range in us Cumulative IO count 00:08:54.858 7790.625 - 7843.264: 0.0811% ( 11) 00:08:54.858 7843.264 - 7895.904: 0.1253% ( 6) 00:08:54.858 7895.904 - 7948.543: 0.1843% ( 8) 00:08:54.858 7948.543 - 8001.182: 0.4348% ( 34) 00:08:54.858 8001.182 - 8053.822: 0.9802% ( 74) 00:08:54.858 8053.822 - 8106.461: 1.8942% ( 124) 00:08:54.858 8106.461 - 8159.100: 3.0218% ( 153) 00:08:54.858 8159.100 - 8211.740: 5.1592% ( 290) 00:08:54.858 8211.740 - 8264.379: 7.4366% ( 
309) 00:08:54.858 8264.379 - 8317.018: 10.2521% ( 382) 00:08:54.858 8317.018 - 8369.658: 13.2002% ( 400) 00:08:54.858 8369.658 - 8422.297: 16.0230% ( 383) 00:08:54.858 8422.297 - 8474.937: 18.8384% ( 382) 00:08:54.858 8474.937 - 8527.576: 21.7571% ( 396) 00:08:54.858 8527.576 - 8580.215: 24.6462% ( 392) 00:08:54.858 8580.215 - 8632.855: 27.4322% ( 378) 00:08:54.858 8632.855 - 8685.494: 30.2403% ( 381) 00:08:54.858 8685.494 - 8738.133: 33.0262% ( 378) 00:08:54.858 8738.133 - 8790.773: 35.7754% ( 373) 00:08:54.858 8790.773 - 8843.412: 38.3771% ( 353) 00:08:54.858 8843.412 - 8896.051: 40.7798% ( 326) 00:08:54.858 8896.051 - 8948.691: 43.0056% ( 302) 00:08:54.858 8948.691 - 9001.330: 44.7892% ( 242) 00:08:54.858 9001.330 - 9053.969: 46.3886% ( 217) 00:08:54.858 9053.969 - 9106.609: 47.9216% ( 208) 00:08:54.858 9106.609 - 9159.248: 49.5430% ( 220) 00:08:54.858 9159.248 - 9211.888: 51.5551% ( 273) 00:08:54.858 9211.888 - 9264.527: 53.8473% ( 311) 00:08:54.858 9264.527 - 9317.166: 56.2279% ( 323) 00:08:54.858 9317.166 - 9369.806: 58.6159% ( 324) 00:08:54.858 9369.806 - 9422.445: 61.0554% ( 331) 00:08:54.858 9422.445 - 9475.084: 63.5982% ( 345) 00:08:54.858 9475.084 - 9527.724: 66.2957% ( 366) 00:08:54.858 9527.724 - 9580.363: 68.8827% ( 351) 00:08:54.858 9580.363 - 9633.002: 71.6613% ( 377) 00:08:54.858 9633.002 - 9685.642: 74.4693% ( 381) 00:08:54.858 9685.642 - 9738.281: 77.2553% ( 378) 00:08:54.858 9738.281 - 9790.920: 80.0781% ( 383) 00:08:54.858 9790.920 - 9843.560: 82.8199% ( 372) 00:08:54.858 9843.560 - 9896.199: 85.4142% ( 352) 00:08:54.858 9896.199 - 9948.839: 87.8096% ( 325) 00:08:54.858 9948.839 - 10001.478: 89.7185% ( 259) 00:08:54.858 10001.478 - 10054.117: 91.3178% ( 217) 00:08:54.858 10054.117 - 10106.757: 92.6002% ( 174) 00:08:54.858 10106.757 - 10159.396: 93.5584% ( 130) 00:08:54.858 10159.396 - 10212.035: 94.2659% ( 96) 00:08:54.858 10212.035 - 10264.675: 94.8998% ( 86) 00:08:54.858 10264.675 - 10317.314: 95.4820% ( 79) 00:08:54.858 10317.314 - 10369.953: 96.0127% ( 72) 00:08:54.858 10369.953 - 10422.593: 96.4328% ( 57) 00:08:54.858 10422.593 - 10475.232: 96.7939% ( 49) 00:08:54.858 10475.232 - 10527.871: 97.1256% ( 45) 00:08:54.858 10527.871 - 10580.511: 97.3246% ( 27) 00:08:54.858 10580.511 - 10633.150: 97.4425% ( 16) 00:08:54.858 10633.150 - 10685.790: 97.5383% ( 13) 00:08:54.858 10685.790 - 10738.429: 97.6047% ( 9) 00:08:54.858 10738.429 - 10791.068: 97.6636% ( 8) 00:08:54.858 10791.068 - 10843.708: 97.7447% ( 11) 00:08:54.858 10843.708 - 10896.347: 97.8037% ( 8) 00:08:54.858 10896.347 - 10948.986: 97.8626% ( 8) 00:08:54.858 10948.986 - 11001.626: 97.9290% ( 9) 00:08:54.858 11001.626 - 11054.265: 97.9584% ( 4) 00:08:54.858 11054.265 - 11106.904: 97.9805% ( 3) 00:08:54.858 11106.904 - 11159.544: 98.0248% ( 6) 00:08:54.858 11159.544 - 11212.183: 98.0616% ( 5) 00:08:54.858 11212.183 - 11264.822: 98.0837% ( 3) 00:08:54.858 11264.822 - 11317.462: 98.1132% ( 4) 00:08:54.858 11317.462 - 11370.101: 98.1574% ( 6) 00:08:54.858 11370.101 - 11422.741: 98.1869% ( 4) 00:08:54.858 11422.741 - 11475.380: 98.2238% ( 5) 00:08:54.858 11475.380 - 11528.019: 98.2532% ( 4) 00:08:54.858 11528.019 - 11580.659: 98.2901% ( 5) 00:08:54.858 11580.659 - 11633.298: 98.3196% ( 4) 00:08:54.858 11633.298 - 11685.937: 98.3343% ( 2) 00:08:54.858 11685.937 - 11738.577: 98.3491% ( 2) 00:08:54.858 11738.577 - 11791.216: 98.3638% ( 2) 00:08:54.858 11791.216 - 11843.855: 98.3712% ( 1) 00:08:54.858 11843.855 - 11896.495: 98.3859% ( 2) 00:08:54.858 11896.495 - 11949.134: 98.4006% ( 2) 00:08:54.858 11949.134 - 
12001.773: 98.4154% ( 2) 00:08:54.858 12001.773 - 12054.413: 98.4301% ( 2) 00:08:54.858 12054.413 - 12107.052: 98.4522% ( 3) 00:08:54.858 12107.052 - 12159.692: 98.4596% ( 1) 00:08:54.858 12159.692 - 12212.331: 98.4744% ( 2) 00:08:54.858 12212.331 - 12264.970: 98.4891% ( 2) 00:08:54.858 12264.970 - 12317.610: 98.5038% ( 2) 00:08:54.858 12317.610 - 12370.249: 98.5186% ( 2) 00:08:54.858 12370.249 - 12422.888: 98.5333% ( 2) 00:08:54.858 12422.888 - 12475.528: 98.5481% ( 2) 00:08:54.858 12475.528 - 12528.167: 98.5628% ( 2) 00:08:54.858 12528.167 - 12580.806: 98.5775% ( 2) 00:08:54.858 12580.806 - 12633.446: 98.5849% ( 1) 00:08:54.858 12844.003 - 12896.643: 98.5996% ( 2) 00:08:54.858 12896.643 - 12949.282: 98.6218% ( 3) 00:08:54.858 12949.282 - 13001.921: 98.6365% ( 2) 00:08:54.858 13001.921 - 13054.561: 98.6512% ( 2) 00:08:54.858 13054.561 - 13107.200: 98.6733% ( 3) 00:08:54.858 13107.200 - 13159.839: 98.6881% ( 2) 00:08:54.858 13159.839 - 13212.479: 98.7176% ( 4) 00:08:54.858 13212.479 - 13265.118: 98.7323% ( 2) 00:08:54.858 13265.118 - 13317.757: 98.7471% ( 2) 00:08:54.858 13317.757 - 13370.397: 98.7692% ( 3) 00:08:54.858 13370.397 - 13423.036: 98.7839% ( 2) 00:08:54.858 13423.036 - 13475.676: 98.7986% ( 2) 00:08:54.858 13475.676 - 13580.954: 98.8355% ( 5) 00:08:54.858 13580.954 - 13686.233: 98.8650% ( 4) 00:08:54.858 13686.233 - 13791.512: 98.8945% ( 4) 00:08:54.858 13791.512 - 13896.790: 98.9239% ( 4) 00:08:54.858 13896.790 - 14002.069: 98.9608% ( 5) 00:08:54.858 14002.069 - 14107.348: 98.9976% ( 5) 00:08:54.858 14107.348 - 14212.627: 99.0271% ( 4) 00:08:54.858 14212.627 - 14317.905: 99.0566% ( 4) 00:08:54.858 32636.402 - 32846.959: 99.0640% ( 1) 00:08:54.858 32846.959 - 33057.516: 99.1082% ( 6) 00:08:54.858 33057.516 - 33268.074: 99.1598% ( 7) 00:08:54.858 33268.074 - 33478.631: 99.2040% ( 6) 00:08:54.858 33478.631 - 33689.189: 99.2482% ( 6) 00:08:54.858 33689.189 - 33899.746: 99.2998% ( 7) 00:08:54.858 33899.746 - 34110.304: 99.3514% ( 7) 00:08:54.858 34110.304 - 34320.861: 99.4030% ( 7) 00:08:54.858 34320.861 - 34531.418: 99.4472% ( 6) 00:08:54.858 34531.418 - 34741.976: 99.4915% ( 6) 00:08:54.858 34741.976 - 34952.533: 99.5283% ( 5) 00:08:54.858 41058.699 - 41269.256: 99.5430% ( 2) 00:08:54.858 41269.256 - 41479.814: 99.5799% ( 5) 00:08:54.858 41479.814 - 41690.371: 99.6241% ( 6) 00:08:54.858 41690.371 - 41900.929: 99.6610% ( 5) 00:08:54.858 41900.929 - 42111.486: 99.7126% ( 7) 00:08:54.858 42111.486 - 42322.043: 99.7568% ( 6) 00:08:54.858 42322.043 - 42532.601: 99.8010% ( 6) 00:08:54.858 42532.601 - 42743.158: 99.8379% ( 5) 00:08:54.858 42743.158 - 42953.716: 99.8894% ( 7) 00:08:54.858 42953.716 - 43164.273: 99.9263% ( 5) 00:08:54.858 43164.273 - 43374.831: 99.9705% ( 6) 00:08:54.858 43374.831 - 43585.388: 100.0000% ( 4) 00:08:54.858 00:08:54.858 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:54.858 ============================================================================== 00:08:54.858 Range in us Cumulative IO count 00:08:54.858 7843.264 - 7895.904: 0.0442% ( 6) 00:08:54.858 7895.904 - 7948.543: 0.2137% ( 23) 00:08:54.858 7948.543 - 8001.182: 0.3317% ( 16) 00:08:54.858 8001.182 - 8053.822: 0.7960% ( 63) 00:08:54.858 8053.822 - 8106.461: 1.7320% ( 127) 00:08:54.858 8106.461 - 8159.100: 3.1397% ( 191) 00:08:54.858 8159.100 - 8211.740: 5.0560% ( 260) 00:08:54.858 8211.740 - 8264.379: 7.4956% ( 331) 00:08:54.858 8264.379 - 8317.018: 10.1636% ( 362) 00:08:54.858 8317.018 - 8369.658: 13.1633% ( 407) 00:08:54.858 8369.658 - 8422.297: 16.0304% ( 389) 00:08:54.858 
8422.297 - 8474.937: 18.8827% ( 387) 00:08:54.858 8474.937 - 8527.576: 21.8676% ( 405) 00:08:54.858 8527.576 - 8580.215: 24.7347% ( 389) 00:08:54.858 8580.215 - 8632.855: 27.5722% ( 385) 00:08:54.858 8632.855 - 8685.494: 30.3950% ( 383) 00:08:54.858 8685.494 - 8738.133: 33.1589% ( 375) 00:08:54.858 8738.133 - 8790.773: 35.8712% ( 368) 00:08:54.858 8790.773 - 8843.412: 38.4655% ( 352) 00:08:54.858 8843.412 - 8896.051: 40.7724% ( 313) 00:08:54.858 8896.051 - 8948.691: 43.0056% ( 303) 00:08:54.858 8948.691 - 9001.330: 44.8113% ( 245) 00:08:54.858 9001.330 - 9053.969: 46.3665% ( 211) 00:08:54.858 9053.969 - 9106.609: 47.7594% ( 189) 00:08:54.858 9106.609 - 9159.248: 49.4030% ( 223) 00:08:54.858 9159.248 - 9211.888: 51.4593% ( 279) 00:08:54.858 9211.888 - 9264.527: 53.7294% ( 308) 00:08:54.859 9264.527 - 9317.166: 56.1394% ( 327) 00:08:54.859 9317.166 - 9369.806: 58.4906% ( 319) 00:08:54.859 9369.806 - 9422.445: 60.9965% ( 340) 00:08:54.859 9422.445 - 9475.084: 63.4876% ( 338) 00:08:54.859 9475.084 - 9527.724: 66.0672% ( 350) 00:08:54.859 9527.724 - 9580.363: 68.6910% ( 356) 00:08:54.859 9580.363 - 9633.002: 71.5065% ( 382) 00:08:54.859 9633.002 - 9685.642: 74.3293% ( 383) 00:08:54.859 9685.642 - 9738.281: 77.2037% ( 390) 00:08:54.859 9738.281 - 9790.920: 80.1076% ( 394) 00:08:54.859 9790.920 - 9843.560: 82.9083% ( 380) 00:08:54.859 9843.560 - 9896.199: 85.4658% ( 347) 00:08:54.859 9896.199 - 9948.839: 87.7285% ( 307) 00:08:54.859 9948.839 - 10001.478: 89.7774% ( 278) 00:08:54.859 10001.478 - 10054.117: 91.4284% ( 224) 00:08:54.859 10054.117 - 10106.757: 92.6666% ( 168) 00:08:54.859 10106.757 - 10159.396: 93.4994% ( 113) 00:08:54.859 10159.396 - 10212.035: 94.2070% ( 96) 00:08:54.859 10212.035 - 10264.675: 94.8555% ( 88) 00:08:54.859 10264.675 - 10317.314: 95.4452% ( 80) 00:08:54.859 10317.314 - 10369.953: 95.9169% ( 64) 00:08:54.859 10369.953 - 10422.593: 96.4033% ( 66) 00:08:54.859 10422.593 - 10475.232: 96.8160% ( 56) 00:08:54.859 10475.232 - 10527.871: 97.1330% ( 43) 00:08:54.859 10527.871 - 10580.511: 97.3320% ( 27) 00:08:54.859 10580.511 - 10633.150: 97.4794% ( 20) 00:08:54.859 10633.150 - 10685.790: 97.5899% ( 15) 00:08:54.859 10685.790 - 10738.429: 97.6636% ( 10) 00:08:54.859 10738.429 - 10791.068: 97.7300% ( 9) 00:08:54.859 10791.068 - 10843.708: 97.7889% ( 8) 00:08:54.859 10843.708 - 10896.347: 97.8258% ( 5) 00:08:54.859 10896.347 - 10948.986: 97.8552% ( 4) 00:08:54.859 10948.986 - 11001.626: 97.8847% ( 4) 00:08:54.859 11001.626 - 11054.265: 97.9216% ( 5) 00:08:54.859 11054.265 - 11106.904: 97.9511% ( 4) 00:08:54.859 11106.904 - 11159.544: 97.9879% ( 5) 00:08:54.859 11159.544 - 11212.183: 98.0248% ( 5) 00:08:54.859 11212.183 - 11264.822: 98.0542% ( 4) 00:08:54.859 11264.822 - 11317.462: 98.0911% ( 5) 00:08:54.859 11317.462 - 11370.101: 98.1206% ( 4) 00:08:54.859 11370.101 - 11422.741: 98.1574% ( 5) 00:08:54.859 11422.741 - 11475.380: 98.1869% ( 4) 00:08:54.859 11475.380 - 11528.019: 98.2238% ( 5) 00:08:54.859 11528.019 - 11580.659: 98.2606% ( 5) 00:08:54.859 11580.659 - 11633.298: 98.2901% ( 4) 00:08:54.859 11633.298 - 11685.937: 98.3269% ( 5) 00:08:54.859 11685.937 - 11738.577: 98.3712% ( 6) 00:08:54.859 11738.577 - 11791.216: 98.4006% ( 4) 00:08:54.859 11791.216 - 11843.855: 98.4375% ( 5) 00:08:54.859 11843.855 - 11896.495: 98.4744% ( 5) 00:08:54.859 11896.495 - 11949.134: 98.4817% ( 1) 00:08:54.859 11949.134 - 12001.773: 98.4891% ( 1) 00:08:54.859 12001.773 - 12054.413: 98.5112% ( 3) 00:08:54.859 12054.413 - 12107.052: 98.5186% ( 1) 00:08:54.859 12107.052 - 12159.692: 98.5259% ( 1) 
00:08:54.859 12159.692 - 12212.331: 98.5407% ( 2) 00:08:54.859 12212.331 - 12264.970: 98.5554% ( 2) 00:08:54.859 12264.970 - 12317.610: 98.5702% ( 2) 00:08:54.859 12317.610 - 12370.249: 98.5849% ( 2) 00:08:54.859 12422.888 - 12475.528: 98.6070% ( 3) 00:08:54.859 12475.528 - 12528.167: 98.6144% ( 1) 00:08:54.859 12528.167 - 12580.806: 98.6218% ( 1) 00:08:54.859 12580.806 - 12633.446: 98.6365% ( 2) 00:08:54.859 12633.446 - 12686.085: 98.6586% ( 3) 00:08:54.859 12686.085 - 12738.724: 98.6733% ( 2) 00:08:54.859 12738.724 - 12791.364: 98.6807% ( 1) 00:08:54.859 12791.364 - 12844.003: 98.6955% ( 2) 00:08:54.859 12844.003 - 12896.643: 98.7102% ( 2) 00:08:54.859 12896.643 - 12949.282: 98.7323% ( 3) 00:08:54.859 12949.282 - 13001.921: 98.7471% ( 2) 00:08:54.859 13001.921 - 13054.561: 98.7692% ( 3) 00:08:54.859 13054.561 - 13107.200: 98.7839% ( 2) 00:08:54.859 13107.200 - 13159.839: 98.7986% ( 2) 00:08:54.859 13159.839 - 13212.479: 98.8208% ( 3) 00:08:54.859 13212.479 - 13265.118: 98.8355% ( 2) 00:08:54.859 13265.118 - 13317.757: 98.8502% ( 2) 00:08:54.859 13317.757 - 13370.397: 98.8723% ( 3) 00:08:54.859 13370.397 - 13423.036: 98.8871% ( 2) 00:08:54.859 13423.036 - 13475.676: 98.9092% ( 3) 00:08:54.859 13475.676 - 13580.954: 98.9387% ( 4) 00:08:54.859 13580.954 - 13686.233: 98.9755% ( 5) 00:08:54.859 13686.233 - 13791.512: 99.0050% ( 4) 00:08:54.859 13791.512 - 13896.790: 99.0345% ( 4) 00:08:54.859 13896.790 - 14002.069: 99.0566% ( 3) 00:08:54.859 31162.500 - 31373.057: 99.0787% ( 3) 00:08:54.859 31373.057 - 31583.614: 99.1229% ( 6) 00:08:54.859 31583.614 - 31794.172: 99.1745% ( 7) 00:08:54.859 31794.172 - 32004.729: 99.2188% ( 6) 00:08:54.859 32004.729 - 32215.287: 99.2556% ( 5) 00:08:54.859 32215.287 - 32425.844: 99.2998% ( 6) 00:08:54.859 32425.844 - 32636.402: 99.3514% ( 7) 00:08:54.859 32636.402 - 32846.959: 99.4030% ( 7) 00:08:54.859 32846.959 - 33057.516: 99.4472% ( 6) 00:08:54.859 33057.516 - 33268.074: 99.4915% ( 6) 00:08:54.859 33268.074 - 33478.631: 99.5283% ( 5) 00:08:54.859 39584.797 - 39795.354: 99.5725% ( 6) 00:08:54.859 39795.354 - 40005.912: 99.6167% ( 6) 00:08:54.859 40005.912 - 40216.469: 99.6462% ( 4) 00:08:54.859 40216.469 - 40427.027: 99.6904% ( 6) 00:08:54.859 40427.027 - 40637.584: 99.7273% ( 5) 00:08:54.859 40637.584 - 40848.141: 99.7789% ( 7) 00:08:54.859 40848.141 - 41058.699: 99.8231% ( 6) 00:08:54.859 41058.699 - 41269.256: 99.8673% ( 6) 00:08:54.859 41269.256 - 41479.814: 99.9116% ( 6) 00:08:54.859 41479.814 - 41690.371: 99.9558% ( 6) 00:08:54.859 41690.371 - 41900.929: 100.0000% ( 6) 00:08:54.859 00:08:54.859 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:54.859 ============================================================================== 00:08:54.859 Range in us Cumulative IO count 00:08:54.859 7843.264 - 7895.904: 0.0295% ( 4) 00:08:54.859 7895.904 - 7948.543: 0.1548% ( 17) 00:08:54.859 7948.543 - 8001.182: 0.3538% ( 27) 00:08:54.859 8001.182 - 8053.822: 0.7591% ( 55) 00:08:54.859 8053.822 - 8106.461: 1.6657% ( 123) 00:08:54.859 8106.461 - 8159.100: 3.0660% ( 190) 00:08:54.859 8159.100 - 8211.740: 5.1592% ( 284) 00:08:54.859 8211.740 - 8264.379: 7.5988% ( 331) 00:08:54.859 8264.379 - 8317.018: 10.2815% ( 364) 00:08:54.859 8317.018 - 8369.658: 13.1633% ( 391) 00:08:54.859 8369.658 - 8422.297: 16.0598% ( 393) 00:08:54.859 8422.297 - 8474.937: 18.9564% ( 393) 00:08:54.859 8474.937 - 8527.576: 21.8308% ( 390) 00:08:54.859 8527.576 - 8580.215: 24.6610% ( 384) 00:08:54.859 8580.215 - 8632.855: 27.5354% ( 390) 00:08:54.859 8632.855 - 8685.494: 
30.3803% ( 386) 00:08:54.859 8685.494 - 8738.133: 33.2473% ( 389) 00:08:54.859 8738.133 - 8790.773: 35.9301% ( 364) 00:08:54.859 8790.773 - 8843.412: 38.4655% ( 344) 00:08:54.859 8843.412 - 8896.051: 40.8240% ( 320) 00:08:54.859 8896.051 - 8948.691: 43.1235% ( 312) 00:08:54.859 8948.691 - 9001.330: 44.8703% ( 237) 00:08:54.859 9001.330 - 9053.969: 46.3001% ( 194) 00:08:54.859 9053.969 - 9106.609: 47.8700% ( 213) 00:08:54.859 9106.609 - 9159.248: 49.3735% ( 204) 00:08:54.859 9159.248 - 9211.888: 51.3193% ( 264) 00:08:54.859 9211.888 - 9264.527: 53.5451% ( 302) 00:08:54.859 9264.527 - 9317.166: 56.0142% ( 335) 00:08:54.859 9317.166 - 9369.806: 58.5053% ( 338) 00:08:54.859 9369.806 - 9422.445: 61.0112% ( 340) 00:08:54.859 9422.445 - 9475.084: 63.4802% ( 335) 00:08:54.859 9475.084 - 9527.724: 66.0967% ( 355) 00:08:54.859 9527.724 - 9580.363: 68.7942% ( 366) 00:08:54.859 9580.363 - 9633.002: 71.6097% ( 382) 00:08:54.859 9633.002 - 9685.642: 74.5062% ( 393) 00:08:54.859 9685.642 - 9738.281: 77.4396% ( 398) 00:08:54.859 9738.281 - 9790.920: 80.1813% ( 372) 00:08:54.859 9790.920 - 9843.560: 82.8641% ( 364) 00:08:54.859 9843.560 - 9896.199: 85.3774% ( 341) 00:08:54.859 9896.199 - 9948.839: 87.7358% ( 320) 00:08:54.859 9948.839 - 10001.478: 89.8364% ( 285) 00:08:54.859 10001.478 - 10054.117: 91.5094% ( 227) 00:08:54.859 10054.117 - 10106.757: 92.7992% ( 175) 00:08:54.859 10106.757 - 10159.396: 93.6026% ( 109) 00:08:54.859 10159.396 - 10212.035: 94.3249% ( 98) 00:08:54.859 10212.035 - 10264.675: 94.9145% ( 80) 00:08:54.859 10264.675 - 10317.314: 95.4452% ( 72) 00:08:54.860 10317.314 - 10369.953: 95.9463% ( 68) 00:08:54.860 10369.953 - 10422.593: 96.3591% ( 56) 00:08:54.860 10422.593 - 10475.232: 96.7423% ( 52) 00:08:54.860 10475.232 - 10527.871: 97.0371% ( 40) 00:08:54.860 10527.871 - 10580.511: 97.2877% ( 34) 00:08:54.860 10580.511 - 10633.150: 97.4278% ( 19) 00:08:54.860 10633.150 - 10685.790: 97.5015% ( 10) 00:08:54.860 10685.790 - 10738.429: 97.5825% ( 11) 00:08:54.860 10738.429 - 10791.068: 97.6415% ( 8) 00:08:54.860 10791.068 - 10843.708: 97.6784% ( 5) 00:08:54.860 10843.708 - 10896.347: 97.7078% ( 4) 00:08:54.860 10896.347 - 10948.986: 97.7594% ( 7) 00:08:54.860 10948.986 - 11001.626: 97.8110% ( 7) 00:08:54.860 11001.626 - 11054.265: 97.8700% ( 8) 00:08:54.860 11054.265 - 11106.904: 97.9216% ( 7) 00:08:54.860 11106.904 - 11159.544: 97.9584% ( 5) 00:08:54.860 11159.544 - 11212.183: 97.9953% ( 5) 00:08:54.860 11212.183 - 11264.822: 98.0321% ( 5) 00:08:54.860 11264.822 - 11317.462: 98.0690% ( 5) 00:08:54.860 11317.462 - 11370.101: 98.1058% ( 5) 00:08:54.860 11370.101 - 11422.741: 98.1353% ( 4) 00:08:54.860 11422.741 - 11475.380: 98.1722% ( 5) 00:08:54.860 11475.380 - 11528.019: 98.2164% ( 6) 00:08:54.860 11528.019 - 11580.659: 98.2459% ( 4) 00:08:54.860 11580.659 - 11633.298: 98.2680% ( 3) 00:08:54.860 11633.298 - 11685.937: 98.3048% ( 5) 00:08:54.860 11685.937 - 11738.577: 98.3343% ( 4) 00:08:54.860 11738.577 - 11791.216: 98.3712% ( 5) 00:08:54.860 11791.216 - 11843.855: 98.4006% ( 4) 00:08:54.860 11843.855 - 11896.495: 98.4375% ( 5) 00:08:54.860 11896.495 - 11949.134: 98.4744% ( 5) 00:08:54.860 11949.134 - 12001.773: 98.5038% ( 4) 00:08:54.860 12001.773 - 12054.413: 98.5333% ( 4) 00:08:54.860 12054.413 - 12107.052: 98.5775% ( 6) 00:08:54.860 12107.052 - 12159.692: 98.6144% ( 5) 00:08:54.860 12159.692 - 12212.331: 98.6291% ( 2) 00:08:54.860 12212.331 - 12264.970: 98.6512% ( 3) 00:08:54.860 12264.970 - 12317.610: 98.6660% ( 2) 00:08:54.860 12317.610 - 12370.249: 98.6807% ( 2) 00:08:54.860 
12370.249 - 12422.888: 98.6881% ( 1) 00:08:54.860 12422.888 - 12475.528: 98.7028% ( 2) 00:08:54.860 12475.528 - 12528.167: 98.7176% ( 2) 00:08:54.860 12528.167 - 12580.806: 98.7323% ( 2) 00:08:54.860 12580.806 - 12633.446: 98.7471% ( 2) 00:08:54.860 12633.446 - 12686.085: 98.7618% ( 2) 00:08:54.860 12686.085 - 12738.724: 98.7765% ( 2) 00:08:54.860 12738.724 - 12791.364: 98.7913% ( 2) 00:08:54.860 12791.364 - 12844.003: 98.8060% ( 2) 00:08:54.860 12844.003 - 12896.643: 98.8208% ( 2) 00:08:54.860 12896.643 - 12949.282: 98.8355% ( 2) 00:08:54.860 12949.282 - 13001.921: 98.8576% ( 3) 00:08:54.860 13001.921 - 13054.561: 98.8650% ( 1) 00:08:54.860 13054.561 - 13107.200: 98.8797% ( 2) 00:08:54.860 13107.200 - 13159.839: 98.9018% ( 3) 00:08:54.860 13159.839 - 13212.479: 98.9166% ( 2) 00:08:54.860 13212.479 - 13265.118: 98.9313% ( 2) 00:08:54.860 13265.118 - 13317.757: 98.9460% ( 2) 00:08:54.860 13317.757 - 13370.397: 98.9608% ( 2) 00:08:54.860 13370.397 - 13423.036: 98.9755% ( 2) 00:08:54.860 13423.036 - 13475.676: 98.9903% ( 2) 00:08:54.860 13475.676 - 13580.954: 99.0124% ( 3) 00:08:54.860 13580.954 - 13686.233: 99.0345% ( 3) 00:08:54.860 13686.233 - 13791.512: 99.0566% ( 3) 00:08:54.860 29478.040 - 29688.598: 99.0861% ( 4) 00:08:54.860 29688.598 - 29899.155: 99.1303% ( 6) 00:08:54.860 29899.155 - 30109.712: 99.1745% ( 6) 00:08:54.860 30109.712 - 30320.270: 99.2114% ( 5) 00:08:54.860 30320.270 - 30530.827: 99.2556% ( 6) 00:08:54.860 30530.827 - 30741.385: 99.2998% ( 6) 00:08:54.860 30741.385 - 30951.942: 99.3367% ( 5) 00:08:54.860 30951.942 - 31162.500: 99.3809% ( 6) 00:08:54.860 31162.500 - 31373.057: 99.4251% ( 6) 00:08:54.860 31373.057 - 31583.614: 99.4693% ( 6) 00:08:54.860 31583.614 - 31794.172: 99.5136% ( 6) 00:08:54.860 31794.172 - 32004.729: 99.5283% ( 2) 00:08:54.860 37689.780 - 37900.337: 99.5357% ( 1) 00:08:54.860 37900.337 - 38110.895: 99.5725% ( 5) 00:08:54.860 38110.895 - 38321.452: 99.6094% ( 5) 00:08:54.860 38321.452 - 38532.010: 99.6536% ( 6) 00:08:54.860 38532.010 - 38742.567: 99.6978% ( 6) 00:08:54.860 38742.567 - 38953.124: 99.7420% ( 6) 00:08:54.860 38953.124 - 39163.682: 99.7863% ( 6) 00:08:54.860 39163.682 - 39374.239: 99.8305% ( 6) 00:08:54.860 39374.239 - 39584.797: 99.8747% ( 6) 00:08:54.860 39584.797 - 39795.354: 99.9189% ( 6) 00:08:54.860 39795.354 - 40005.912: 99.9558% ( 5) 00:08:54.860 40005.912 - 40216.469: 100.0000% ( 6) 00:08:54.860 00:08:54.860 10:45:43 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:56.244 Initializing NVMe Controllers 00:08:56.244 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:56.244 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:56.244 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:56.244 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:56.245 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:56.245 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:56.245 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:56.245 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:56.245 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:56.245 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:56.245 Initialization complete. Launching workers. 
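[Annotation] The run recorded above was started with spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0. A minimal sketch of driving the same binary from a script follows; the binary path and flag values are taken from the invocation line in this log, while the per-flag glosses are my reading of the tool's usage text and should be treated as assumptions, not authoritative documentation.

    import subprocess

    # Binary path as recorded in this log; adjust to your own checkout.
    PERF = "/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf"

    cmd = [
        PERF,
        "-q", "128",    # outstanding I/Os per queue (queue depth), per the log's invocation
        "-w", "write",  # I/O pattern: sequential writes
        "-o", "12288",  # I/O size in bytes (12 KiB)
        "-t", "1",      # run time in seconds
        "-LL",          # latency tracking; as I read perf's help text, a single -L
                        # prints a summary and doubling it requests the detailed
                        # per-bucket histograms seen in this output
        "-i", "0",      # shared memory group ID, so multiple SPDK apps can coexist
    ]
    subprocess.run(cmd, check=True)

The results below (device summary table, percentile summaries, and per-bucket latency histograms) are the direct output of that invocation.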
00:08:56.245 ======================================================== 00:08:56.245 Latency(us) 00:08:56.245 Device Information : IOPS MiB/s Average min max 00:08:56.245 PCIE (0000:00:10.0) NSID 1 from core 0: 12401.09 145.33 10346.12 6877.82 70410.65 00:08:56.245 PCIE (0000:00:11.0) NSID 1 from core 0: 12401.09 145.33 10330.25 7119.43 72853.19 00:08:56.245 PCIE (0000:00:13.0) NSID 1 from core 0: 12401.09 145.33 10314.49 6945.53 75585.39 00:08:56.245 PCIE (0000:00:12.0) NSID 1 from core 0: 12401.09 145.33 10298.59 7019.13 77782.91 00:08:56.245 PCIE (0000:00:12.0) NSID 2 from core 0: 12401.09 145.33 10283.58 7001.63 79334.73 00:08:56.245 PCIE (0000:00:12.0) NSID 3 from core 0: 12401.09 145.33 10268.53 7071.27 81785.32 00:08:56.245 ======================================================== 00:08:56.245 Total : 74406.56 871.95 10306.93 6877.82 81785.32 00:08:56.245 00:08:56.245 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:56.245 ================================================================================= 00:08:56.245 1.00000% : 7316.871us 00:08:56.245 10.00000% : 7632.707us 00:08:56.245 25.00000% : 7895.904us 00:08:56.245 50.00000% : 8369.658us 00:08:56.245 75.00000% : 9211.888us 00:08:56.245 90.00000% : 9896.199us 00:08:56.245 95.00000% : 13212.479us 00:08:56.245 98.00000% : 61482.769us 00:08:56.245 99.00000% : 64851.688us 00:08:56.245 99.50000% : 67378.378us 00:08:56.245 99.90000% : 69483.952us 00:08:56.245 99.99000% : 70326.182us 00:08:56.245 99.99900% : 70747.296us 00:08:56.245 99.99990% : 70747.296us 00:08:56.245 99.99999% : 70747.296us 00:08:56.245 00:08:56.245 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:56.245 ================================================================================= 00:08:56.245 1.00000% : 7369.510us 00:08:56.245 10.00000% : 7685.346us 00:08:56.245 25.00000% : 7895.904us 00:08:56.245 50.00000% : 8369.658us 00:08:56.245 75.00000% : 9211.888us 00:08:56.245 90.00000% : 9896.199us 00:08:56.245 95.00000% : 12949.282us 00:08:56.245 98.00000% : 61482.769us 00:08:56.245 99.00000% : 63167.229us 00:08:56.245 99.50000% : 66957.263us 00:08:56.245 99.90000% : 72010.641us 00:08:56.245 99.99000% : 72852.871us 00:08:56.245 99.99900% : 73273.986us 00:08:56.245 99.99990% : 73273.986us 00:08:56.245 99.99999% : 73273.986us 00:08:56.245 00:08:56.245 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:56.245 ================================================================================= 00:08:56.245 1.00000% : 7316.871us 00:08:56.245 10.00000% : 7685.346us 00:08:56.245 25.00000% : 7948.543us 00:08:56.245 50.00000% : 8317.018us 00:08:56.245 75.00000% : 9211.888us 00:08:56.245 90.00000% : 9790.920us 00:08:56.245 95.00000% : 12949.282us 00:08:56.245 98.00000% : 60640.540us 00:08:56.245 99.00000% : 63167.229us 00:08:56.245 99.50000% : 69483.952us 00:08:56.245 99.90000% : 74537.330us 00:08:56.245 99.99000% : 75800.675us 00:08:56.245 99.99900% : 75800.675us 00:08:56.245 99.99990% : 75800.675us 00:08:56.245 99.99999% : 75800.675us 00:08:56.245 00:08:56.245 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:56.245 ================================================================================= 00:08:56.245 1.00000% : 7369.510us 00:08:56.245 10.00000% : 7685.346us 00:08:56.245 25.00000% : 7948.543us 00:08:56.245 50.00000% : 8317.018us 00:08:56.245 75.00000% : 9211.888us 00:08:56.245 90.00000% : 9843.560us 00:08:56.245 95.00000% : 12528.167us 00:08:56.245 98.00000% : 60219.425us 00:08:56.245 
99.00000% : 63588.344us 00:08:56.245 99.50000% : 72010.641us 00:08:56.245 99.90000% : 76642.904us 00:08:56.245 99.99000% : 77906.249us 00:08:56.245 99.99900% : 77906.249us 00:08:56.245 99.99990% : 77906.249us 00:08:56.245 99.99999% : 77906.249us 00:08:56.245 00:08:56.245 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:56.245 ================================================================================= 00:08:56.245 1.00000% : 7369.510us 00:08:56.245 10.00000% : 7685.346us 00:08:56.245 25.00000% : 7948.543us 00:08:56.245 50.00000% : 8369.658us 00:08:56.245 75.00000% : 9264.527us 00:08:56.245 90.00000% : 9790.920us 00:08:56.245 95.00000% : 12475.528us 00:08:56.245 98.00000% : 59798.310us 00:08:56.245 99.00000% : 63588.344us 00:08:56.245 99.50000% : 74116.215us 00:08:56.245 99.90000% : 77906.249us 00:08:56.245 99.99000% : 79590.708us 00:08:56.245 99.99900% : 79590.708us 00:08:56.245 99.99990% : 79590.708us 00:08:56.245 99.99999% : 79590.708us 00:08:56.245 00:08:56.245 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:56.245 ================================================================================= 00:08:56.245 1.00000% : 7369.510us 00:08:56.245 10.00000% : 7685.346us 00:08:56.245 25.00000% : 7895.904us 00:08:56.245 50.00000% : 8317.018us 00:08:56.245 75.00000% : 9211.888us 00:08:56.245 90.00000% : 9790.920us 00:08:56.245 95.00000% : 12949.282us 00:08:56.245 98.00000% : 60219.425us 00:08:56.245 99.00000% : 63588.344us 00:08:56.245 99.50000% : 76221.790us 00:08:56.245 99.90000% : 80854.053us 00:08:56.245 99.99000% : 81696.283us 00:08:56.245 99.99900% : 82117.398us 00:08:56.245 99.99990% : 82117.398us 00:08:56.245 99.99999% : 82117.398us 00:08:56.245 00:08:56.245 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:56.245 ============================================================================== 00:08:56.245 Range in us Cumulative IO count 00:08:56.245 6843.116 - 6895.756: 0.0081% ( 1) 00:08:56.245 7001.035 - 7053.674: 0.0483% ( 5) 00:08:56.245 7053.674 - 7106.313: 0.0886% ( 5) 00:08:56.245 7106.313 - 7158.953: 0.1691% ( 10) 00:08:56.245 7158.953 - 7211.592: 0.2819% ( 14) 00:08:56.245 7211.592 - 7264.231: 0.6041% ( 40) 00:08:56.245 7264.231 - 7316.871: 1.1034% ( 62) 00:08:56.245 7316.871 - 7369.510: 1.8283% ( 90) 00:08:56.245 7369.510 - 7422.149: 2.8189% ( 123) 00:08:56.245 7422.149 - 7474.789: 4.1720% ( 168) 00:08:56.245 7474.789 - 7527.428: 5.8715% ( 211) 00:08:56.245 7527.428 - 7580.067: 7.7642% ( 235) 00:08:56.245 7580.067 - 7632.707: 10.0757% ( 287) 00:08:56.245 7632.707 - 7685.346: 13.0155% ( 365) 00:08:56.245 7685.346 - 7737.986: 16.5432% ( 438) 00:08:56.245 7737.986 - 7790.625: 19.3702% ( 351) 00:08:56.245 7790.625 - 7843.264: 22.2052% ( 352) 00:08:56.245 7843.264 - 7895.904: 25.7007% ( 434) 00:08:56.245 7895.904 - 7948.543: 28.9626% ( 405) 00:08:56.245 7948.543 - 8001.182: 32.2245% ( 405) 00:08:56.245 8001.182 - 8053.822: 35.1804% ( 367) 00:08:56.245 8053.822 - 8106.461: 37.8947% ( 337) 00:08:56.245 8106.461 - 8159.100: 41.0921% ( 397) 00:08:56.245 8159.100 - 8211.740: 43.7178% ( 326) 00:08:56.245 8211.740 - 8264.379: 46.0213% ( 286) 00:08:56.245 8264.379 - 8317.018: 48.7677% ( 341) 00:08:56.245 8317.018 - 8369.658: 52.0860% ( 412) 00:08:56.245 8369.658 - 8422.297: 54.9855% ( 360) 00:08:56.245 8422.297 - 8474.937: 57.2970% ( 287) 00:08:56.245 8474.937 - 8527.576: 59.3347% ( 253) 00:08:56.245 8527.576 - 8580.215: 61.7107% ( 295) 00:08:56.245 8580.215 - 8632.855: 63.2490% ( 191) 00:08:56.245 8632.855 - 8685.494: 
64.7068% ( 181) 00:08:56.245 8685.494 - 8738.133: 66.1485% ( 179) 00:08:56.245 8738.133 - 8790.773: 67.2197% ( 133) 00:08:56.245 8790.773 - 8843.412: 68.2426% ( 127) 00:08:56.245 8843.412 - 8896.051: 68.9030% ( 82) 00:08:56.245 8896.051 - 8948.691: 69.5635% ( 82) 00:08:56.245 8948.691 - 9001.330: 70.4736% ( 113) 00:08:56.245 9001.330 - 9053.969: 71.4803% ( 125) 00:08:56.245 9053.969 - 9106.609: 72.7529% ( 158) 00:08:56.245 9106.609 - 9159.248: 74.1221% ( 170) 00:08:56.245 9159.248 - 9211.888: 75.4913% ( 170) 00:08:56.245 9211.888 - 9264.527: 76.8283% ( 166) 00:08:56.245 9264.527 - 9317.166: 78.3425% ( 188) 00:08:56.245 9317.166 - 9369.806: 79.6956% ( 168) 00:08:56.245 9369.806 - 9422.445: 81.3225% ( 202) 00:08:56.245 9422.445 - 9475.084: 83.1186% ( 223) 00:08:56.245 9475.084 - 9527.724: 84.3106% ( 148) 00:08:56.245 9527.724 - 9580.363: 85.5831% ( 158) 00:08:56.245 9580.363 - 9633.002: 86.5818% ( 124) 00:08:56.245 9633.002 - 9685.642: 87.4356% ( 106) 00:08:56.245 9685.642 - 9738.281: 88.3054% ( 108) 00:08:56.245 9738.281 - 9790.920: 88.9417% ( 79) 00:08:56.245 9790.920 - 9843.560: 89.5860% ( 80) 00:08:56.245 9843.560 - 9896.199: 90.3995% ( 101) 00:08:56.245 9896.199 - 9948.839: 91.0116% ( 76) 00:08:56.245 9948.839 - 10001.478: 91.3821% ( 46) 00:08:56.245 10001.478 - 10054.117: 91.6479% ( 33) 00:08:56.245 10054.117 - 10106.757: 92.1311% ( 60) 00:08:56.245 10106.757 - 10159.396: 92.6063% ( 59) 00:08:56.245 10159.396 - 10212.035: 92.7996% ( 24) 00:08:56.245 10212.035 - 10264.675: 92.9688% ( 21) 00:08:56.245 10264.675 - 10317.314: 93.0815% ( 14) 00:08:56.245 10317.314 - 10369.953: 93.1862% ( 13) 00:08:56.245 10369.953 - 10422.593: 93.2587% ( 9) 00:08:56.245 10422.593 - 10475.232: 93.3553% ( 12) 00:08:56.245 10475.232 - 10527.871: 93.4278% ( 9) 00:08:56.245 10527.871 - 10580.511: 93.5003% ( 9) 00:08:56.245 10580.511 - 10633.150: 93.5567% ( 7) 00:08:56.245 10633.150 - 10685.790: 93.6211% ( 8) 00:08:56.245 10685.790 - 10738.429: 93.7097% ( 11) 00:08:56.246 10738.429 - 10791.068: 93.8225% ( 14) 00:08:56.246 10791.068 - 10843.708: 93.8628% ( 5) 00:08:56.246 10843.708 - 10896.347: 93.8869% ( 3) 00:08:56.246 10948.986 - 11001.626: 93.9111% ( 3) 00:08:56.246 11001.626 - 11054.265: 93.9433% ( 4) 00:08:56.246 11054.265 - 11106.904: 93.9755% ( 4) 00:08:56.246 11106.904 - 11159.544: 94.0399% ( 8) 00:08:56.246 11159.544 - 11212.183: 94.0802% ( 5) 00:08:56.246 11212.183 - 11264.822: 94.1205% ( 5) 00:08:56.246 11264.822 - 11317.462: 94.1688% ( 6) 00:08:56.246 11317.462 - 11370.101: 94.2332% ( 8) 00:08:56.246 11370.101 - 11422.741: 94.2574% ( 3) 00:08:56.246 11422.741 - 11475.380: 94.3057% ( 6) 00:08:56.246 11475.380 - 11528.019: 94.3621% ( 7) 00:08:56.246 11528.019 - 11580.659: 94.4346% ( 9) 00:08:56.246 11580.659 - 11633.298: 94.4588% ( 3) 00:08:56.246 11633.298 - 11685.937: 94.5393% ( 10) 00:08:56.246 11685.937 - 11738.577: 94.5715% ( 4) 00:08:56.246 11738.577 - 11791.216: 94.6198% ( 6) 00:08:56.246 11791.216 - 11843.855: 94.6521% ( 4) 00:08:56.246 11843.855 - 11896.495: 94.6762% ( 3) 00:08:56.246 11896.495 - 11949.134: 94.6923% ( 2) 00:08:56.246 11949.134 - 12001.773: 94.7245% ( 4) 00:08:56.246 12001.773 - 12054.413: 94.7407% ( 2) 00:08:56.246 12054.413 - 12107.052: 94.7568% ( 2) 00:08:56.246 12107.052 - 12159.692: 94.7729% ( 2) 00:08:56.246 12159.692 - 12212.331: 94.7890% ( 2) 00:08:56.246 12212.331 - 12264.970: 94.8212% ( 4) 00:08:56.246 12264.970 - 12317.610: 94.8454% ( 3) 00:08:56.246 12949.282 - 13001.921: 94.8615% ( 2) 00:08:56.246 13001.921 - 13054.561: 94.8856% ( 3) 00:08:56.246 13054.561 - 
13107.200: 94.9259% ( 5) 00:08:56.246 13107.200 - 13159.839: 94.9823% ( 7) 00:08:56.246 13159.839 - 13212.479: 95.0226% ( 5) 00:08:56.246 13212.479 - 13265.118: 95.0628% ( 5) 00:08:56.246 13265.118 - 13317.757: 95.0870% ( 3) 00:08:56.246 13317.757 - 13370.397: 95.1111% ( 3) 00:08:56.246 13423.036 - 13475.676: 95.1273% ( 2) 00:08:56.246 13475.676 - 13580.954: 95.1595% ( 4) 00:08:56.246 13580.954 - 13686.233: 95.1756% ( 2) 00:08:56.246 13686.233 - 13791.512: 95.1836% ( 1) 00:08:56.246 13791.512 - 13896.790: 95.2078% ( 3) 00:08:56.246 13896.790 - 14002.069: 95.2320% ( 3) 00:08:56.246 14002.069 - 14107.348: 95.2722% ( 5) 00:08:56.246 14107.348 - 14212.627: 95.3286% ( 7) 00:08:56.246 14212.627 - 14317.905: 95.3447% ( 2) 00:08:56.246 14528.463 - 14633.741: 95.3608% ( 2) 00:08:56.246 15897.086 - 16002.365: 95.3850% ( 3) 00:08:56.246 16002.365 - 16107.643: 95.4414% ( 7) 00:08:56.246 16107.643 - 16212.922: 95.4655% ( 3) 00:08:56.246 16212.922 - 16318.201: 95.5139% ( 6) 00:08:56.246 16318.201 - 16423.480: 95.5380% ( 3) 00:08:56.246 16423.480 - 16528.758: 95.5622% ( 3) 00:08:56.246 16528.758 - 16634.037: 95.5863% ( 3) 00:08:56.246 16634.037 - 16739.316: 95.6024% ( 2) 00:08:56.246 16739.316 - 16844.594: 95.6427% ( 5) 00:08:56.246 16844.594 - 16949.873: 95.6830% ( 5) 00:08:56.246 17160.431 - 17265.709: 95.6991% ( 2) 00:08:56.246 17265.709 - 17370.988: 95.7152% ( 2) 00:08:56.246 17370.988 - 17476.267: 95.7233% ( 1) 00:08:56.246 17476.267 - 17581.545: 95.7555% ( 4) 00:08:56.246 17581.545 - 17686.824: 95.7635% ( 1) 00:08:56.246 17686.824 - 17792.103: 95.7877% ( 3) 00:08:56.246 17792.103 - 17897.382: 95.8119% ( 3) 00:08:56.246 17897.382 - 18002.660: 95.8280% ( 2) 00:08:56.246 18002.660 - 18107.939: 95.8521% ( 3) 00:08:56.246 18107.939 - 18213.218: 95.8763% ( 3) 00:08:56.246 18739.611 - 18844.890: 95.8924% ( 2) 00:08:56.246 18844.890 - 18950.169: 95.9085% ( 2) 00:08:56.246 18950.169 - 19055.447: 95.9649% ( 7) 00:08:56.246 19055.447 - 19160.726: 96.0052% ( 5) 00:08:56.246 19160.726 - 19266.005: 96.0454% ( 5) 00:08:56.246 19266.005 - 19371.284: 96.0615% ( 2) 00:08:56.246 19371.284 - 19476.562: 96.0776% ( 2) 00:08:56.246 19476.562 - 19581.841: 96.1018% ( 3) 00:08:56.246 19581.841 - 19687.120: 96.1179% ( 2) 00:08:56.246 19687.120 - 19792.398: 96.1260% ( 1) 00:08:56.246 19792.398 - 19897.677: 96.1501% ( 3) 00:08:56.246 19897.677 - 20002.956: 96.1743% ( 3) 00:08:56.246 20213.513 - 20318.792: 96.1904% ( 2) 00:08:56.246 20424.071 - 20529.349: 96.2146% ( 3) 00:08:56.246 20529.349 - 20634.628: 96.2226% ( 1) 00:08:56.246 20634.628 - 20739.907: 96.2307% ( 1) 00:08:56.246 21161.022 - 21266.300: 96.2387% ( 1) 00:08:56.246 21371.579 - 21476.858: 96.2548% ( 2) 00:08:56.246 21476.858 - 21582.137: 96.2629% ( 1) 00:08:56.246 21582.137 - 21687.415: 96.2709% ( 1) 00:08:56.246 21687.415 - 21792.694: 96.2790% ( 1) 00:08:56.246 21792.694 - 21897.973: 96.2870% ( 1) 00:08:56.246 21897.973 - 22003.251: 96.2951% ( 1) 00:08:56.246 22108.530 - 22213.809: 96.3032% ( 1) 00:08:56.246 22213.809 - 22319.088: 96.3112% ( 1) 00:08:56.246 22319.088 - 22424.366: 96.3193% ( 1) 00:08:56.246 22424.366 - 22529.645: 96.3273% ( 1) 00:08:56.246 22529.645 - 22634.924: 96.3354% ( 1) 00:08:56.246 22634.924 - 22740.202: 96.3434% ( 1) 00:08:56.246 22740.202 - 22845.481: 96.3595% ( 2) 00:08:56.246 22845.481 - 22950.760: 96.3676% ( 1) 00:08:56.246 23056.039 - 23161.317: 96.3756% ( 1) 00:08:56.246 25793.285 - 25898.564: 96.3837% ( 1) 00:08:56.246 26003.843 - 26109.121: 96.3918% ( 1) 00:08:56.246 30951.942 - 31162.500: 96.4079% ( 2) 00:08:56.246 31162.500 - 
31373.057: 96.4240% ( 2) 00:08:56.246 31373.057 - 31583.614: 96.4320% ( 1) 00:08:56.246 31583.614 - 31794.172: 96.4481% ( 2) 00:08:56.246 31794.172 - 32004.729: 96.4642% ( 2) 00:08:56.246 32004.729 - 32215.287: 96.4803% ( 2) 00:08:56.246 32215.287 - 32425.844: 96.4884% ( 1) 00:08:56.246 32425.844 - 32636.402: 96.5126% ( 3) 00:08:56.246 32636.402 - 32846.959: 96.5206% ( 1) 00:08:56.246 32846.959 - 33057.516: 96.5367% ( 2) 00:08:56.246 33057.516 - 33268.074: 96.5528% ( 2) 00:08:56.246 33268.074 - 33478.631: 96.5609% ( 1) 00:08:56.246 33478.631 - 33689.189: 96.5851% ( 3) 00:08:56.246 33689.189 - 33899.746: 96.5931% ( 1) 00:08:56.246 33899.746 - 34110.304: 96.6092% ( 2) 00:08:56.246 34110.304 - 34320.861: 96.6253% ( 2) 00:08:56.246 34320.861 - 34531.418: 96.6414% ( 2) 00:08:56.246 34531.418 - 34741.976: 96.6495% ( 1) 00:08:56.246 34741.976 - 34952.533: 96.6656% ( 2) 00:08:56.246 34952.533 - 35163.091: 96.6817% ( 2) 00:08:56.246 35163.091 - 35373.648: 96.6898% ( 1) 00:08:56.246 35373.648 - 35584.206: 96.7059% ( 2) 00:08:56.246 35584.206 - 35794.763: 96.7220% ( 2) 00:08:56.246 35794.763 - 36005.320: 96.7381% ( 2) 00:08:56.246 36005.320 - 36215.878: 96.7542% ( 2) 00:08:56.246 36215.878 - 36426.435: 96.7703% ( 2) 00:08:56.246 36426.435 - 36636.993: 96.7864% ( 2) 00:08:56.246 36636.993 - 36847.550: 96.8025% ( 2) 00:08:56.246 36847.550 - 37058.108: 96.8106% ( 1) 00:08:56.246 37058.108 - 37268.665: 96.8669% ( 7) 00:08:56.246 37268.665 - 37479.222: 96.8831% ( 2) 00:08:56.246 37479.222 - 37689.780: 96.8992% ( 2) 00:08:56.246 37689.780 - 37900.337: 96.9072% ( 1) 00:08:56.246 41058.699 - 41269.256: 96.9233% ( 2) 00:08:56.246 41269.256 - 41479.814: 96.9475% ( 3) 00:08:56.246 41479.814 - 41690.371: 96.9555% ( 1) 00:08:56.246 41690.371 - 41900.929: 96.9797% ( 3) 00:08:56.246 41900.929 - 42111.486: 97.0119% ( 4) 00:08:56.246 42111.486 - 42322.043: 97.0441% ( 4) 00:08:56.246 42322.043 - 42532.601: 97.0522% ( 1) 00:08:56.246 42532.601 - 42743.158: 97.0764% ( 3) 00:08:56.246 42743.158 - 42953.716: 97.0925% ( 2) 00:08:56.246 42953.716 - 43164.273: 97.1086% ( 2) 00:08:56.246 43164.273 - 43374.831: 97.1247% ( 2) 00:08:56.246 43374.831 - 43585.388: 97.1488% ( 3) 00:08:56.246 43585.388 - 43795.945: 97.1649% ( 2) 00:08:56.246 43795.945 - 44006.503: 97.1811% ( 2) 00:08:56.246 44006.503 - 44217.060: 97.1972% ( 2) 00:08:56.246 44217.060 - 44427.618: 97.2133% ( 2) 00:08:56.246 44427.618 - 44638.175: 97.2213% ( 1) 00:08:56.246 44638.175 - 44848.733: 97.2294% ( 1) 00:08:56.246 44848.733 - 45059.290: 97.2616% ( 4) 00:08:56.246 45059.290 - 45269.847: 97.2777% ( 2) 00:08:56.246 45269.847 - 45480.405: 97.2938% ( 2) 00:08:56.246 45480.405 - 45690.962: 97.3099% ( 2) 00:08:56.246 46112.077 - 46322.635: 97.3260% ( 2) 00:08:56.246 46322.635 - 46533.192: 97.3341% ( 1) 00:08:56.246 46533.192 - 46743.749: 97.3502% ( 2) 00:08:56.246 46743.749 - 46954.307: 97.3663% ( 2) 00:08:56.246 46954.307 - 47164.864: 97.3824% ( 2) 00:08:56.246 47164.864 - 47375.422: 97.3985% ( 2) 00:08:56.246 47375.422 - 47585.979: 97.4146% ( 2) 00:08:56.246 47585.979 - 47796.537: 97.4227% ( 1) 00:08:56.246 56008.276 - 56429.391: 97.4307% ( 1) 00:08:56.246 57271.621 - 57692.736: 97.4468% ( 2) 00:08:56.246 58956.080 - 59377.195: 97.6160% ( 21) 00:08:56.246 59377.195 - 59798.310: 97.7207% ( 13) 00:08:56.246 59798.310 - 60219.425: 97.7529% ( 4) 00:08:56.246 60219.425 - 60640.540: 97.8093% ( 7) 00:08:56.246 60640.540 - 61061.655: 97.8979% ( 11) 00:08:56.246 61061.655 - 61482.769: 98.0751% ( 22) 00:08:56.246 61482.769 - 61903.884: 98.3167% ( 30) 00:08:56.246 61903.884 - 
62324.999: 98.3972% ( 10) 00:08:56.246 62324.999 - 62746.114: 98.4939% ( 12) 00:08:56.246 62746.114 - 63167.229: 98.5986% ( 13) 00:08:56.246 63167.229 - 63588.344: 98.7999% ( 25) 00:08:56.246 63588.344 - 64009.459: 98.9127% ( 14) 00:08:56.246 64009.459 - 64430.573: 98.9610% ( 6) 00:08:56.246 64430.573 - 64851.688: 99.0818% ( 15) 00:08:56.246 64851.688 - 65272.803: 99.1221% ( 5) 00:08:56.246 65272.803 - 65693.918: 99.1785% ( 7) 00:08:56.247 65693.918 - 66115.033: 99.2590% ( 10) 00:08:56.247 66115.033 - 66536.148: 99.3637% ( 13) 00:08:56.247 66536.148 - 66957.263: 99.4523% ( 11) 00:08:56.247 66957.263 - 67378.378: 99.5812% ( 16) 00:08:56.247 67378.378 - 67799.492: 99.6859% ( 13) 00:08:56.247 67799.492 - 68220.607: 99.7745% ( 11) 00:08:56.247 68220.607 - 68641.722: 99.8470% ( 9) 00:08:56.247 68641.722 - 69062.837: 99.8953% ( 6) 00:08:56.247 69062.837 - 69483.952: 99.9275% ( 4) 00:08:56.247 69483.952 - 69905.067: 99.9678% ( 5) 00:08:56.247 69905.067 - 70326.182: 99.9919% ( 3) 00:08:56.247 70326.182 - 70747.296: 100.0000% ( 1) 00:08:56.247 00:08:56.247 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:56.247 ============================================================================== 00:08:56.247 Range in us Cumulative IO count 00:08:56.247 7106.313 - 7158.953: 0.0081% ( 1) 00:08:56.247 7158.953 - 7211.592: 0.0564% ( 6) 00:08:56.247 7211.592 - 7264.231: 0.1852% ( 16) 00:08:56.247 7264.231 - 7316.871: 0.4349% ( 31) 00:08:56.247 7316.871 - 7369.510: 1.1759% ( 92) 00:08:56.247 7369.510 - 7422.149: 1.7800% ( 75) 00:08:56.247 7422.149 - 7474.789: 3.0606% ( 159) 00:08:56.247 7474.789 - 7527.428: 5.5090% ( 304) 00:08:56.247 7527.428 - 7580.067: 7.1279% ( 201) 00:08:56.247 7580.067 - 7632.707: 9.1978% ( 257) 00:08:56.247 7632.707 - 7685.346: 11.8073% ( 324) 00:08:56.247 7685.346 - 7737.986: 14.7229% ( 362) 00:08:56.247 7737.986 - 7790.625: 17.9285% ( 398) 00:08:56.247 7790.625 - 7843.264: 21.4079% ( 432) 00:08:56.247 7843.264 - 7895.904: 25.1128% ( 460) 00:08:56.247 7895.904 - 7948.543: 28.0122% ( 360) 00:08:56.247 7948.543 - 8001.182: 31.3064% ( 409) 00:08:56.247 8001.182 - 8053.822: 34.5200% ( 399) 00:08:56.247 8053.822 - 8106.461: 38.1202% ( 447) 00:08:56.247 8106.461 - 8159.100: 41.2532% ( 389) 00:08:56.247 8159.100 - 8211.740: 43.8547% ( 323) 00:08:56.247 8211.740 - 8264.379: 46.5206% ( 331) 00:08:56.247 8264.379 - 8317.018: 49.0818% ( 318) 00:08:56.247 8317.018 - 8369.658: 53.0122% ( 488) 00:08:56.247 8369.658 - 8422.297: 56.0889% ( 382) 00:08:56.247 8422.297 - 8474.937: 58.4407% ( 292) 00:08:56.247 8474.937 - 8527.576: 60.6314% ( 272) 00:08:56.247 8527.576 - 8580.215: 63.4343% ( 348) 00:08:56.247 8580.215 - 8632.855: 65.0209% ( 197) 00:08:56.247 8632.855 - 8685.494: 65.9955% ( 121) 00:08:56.247 8685.494 - 8738.133: 66.7445% ( 93) 00:08:56.247 8738.133 - 8790.773: 67.8963% ( 143) 00:08:56.247 8790.773 - 8843.412: 68.7097% ( 101) 00:08:56.247 8843.412 - 8896.051: 69.2816% ( 71) 00:08:56.247 8896.051 - 8948.691: 70.0709% ( 98) 00:08:56.247 8948.691 - 9001.330: 70.7796% ( 88) 00:08:56.247 9001.330 - 9053.969: 71.5528% ( 96) 00:08:56.247 9053.969 - 9106.609: 72.7368% ( 147) 00:08:56.247 9106.609 - 9159.248: 74.0255% ( 160) 00:08:56.247 9159.248 - 9211.888: 75.2899% ( 157) 00:08:56.247 9211.888 - 9264.527: 76.5867% ( 161) 00:08:56.247 9264.527 - 9317.166: 77.8995% ( 163) 00:08:56.247 9317.166 - 9369.806: 79.2123% ( 163) 00:08:56.247 9369.806 - 9422.445: 81.1614% ( 242) 00:08:56.247 9422.445 - 9475.084: 82.4662% ( 162) 00:08:56.247 9475.084 - 9527.724: 83.6904% ( 152) 00:08:56.247 
9527.724 - 9580.363: 84.8260% ( 141) 00:08:56.247 9580.363 - 9633.002: 86.2838% ( 181) 00:08:56.247 9633.002 - 9685.642: 87.3631% ( 134) 00:08:56.247 9685.642 - 9738.281: 88.2651% ( 112) 00:08:56.247 9738.281 - 9790.920: 89.2155% ( 118) 00:08:56.247 9790.920 - 9843.560: 89.9243% ( 88) 00:08:56.247 9843.560 - 9896.199: 90.5122% ( 73) 00:08:56.247 9896.199 - 9948.839: 91.1082% ( 74) 00:08:56.247 9948.839 - 10001.478: 91.5673% ( 57) 00:08:56.247 10001.478 - 10054.117: 91.9137% ( 43) 00:08:56.247 10054.117 - 10106.757: 92.3647% ( 56) 00:08:56.247 10106.757 - 10159.396: 92.6949% ( 41) 00:08:56.247 10159.396 - 10212.035: 92.8560% ( 20) 00:08:56.247 10212.035 - 10264.675: 93.0251% ( 21) 00:08:56.247 10264.675 - 10317.314: 93.1620% ( 17) 00:08:56.247 10317.314 - 10369.953: 93.2668% ( 13) 00:08:56.247 10369.953 - 10422.593: 93.3634% ( 12) 00:08:56.247 10422.593 - 10475.232: 93.4681% ( 13) 00:08:56.247 10475.232 - 10527.871: 93.5486% ( 10) 00:08:56.247 10527.871 - 10580.511: 93.6372% ( 11) 00:08:56.247 10580.511 - 10633.150: 93.6856% ( 6) 00:08:56.247 10633.150 - 10685.790: 93.7339% ( 6) 00:08:56.247 10685.790 - 10738.429: 93.7661% ( 4) 00:08:56.247 10738.429 - 10791.068: 93.8064% ( 5) 00:08:56.247 10791.068 - 10843.708: 93.8466% ( 5) 00:08:56.247 10843.708 - 10896.347: 93.8950% ( 6) 00:08:56.247 10896.347 - 10948.986: 93.9755% ( 10) 00:08:56.247 10948.986 - 11001.626: 94.0963% ( 15) 00:08:56.247 11001.626 - 11054.265: 94.1930% ( 12) 00:08:56.247 11054.265 - 11106.904: 94.3057% ( 14) 00:08:56.247 11106.904 - 11159.544: 94.3218% ( 2) 00:08:56.247 11159.544 - 11212.183: 94.3299% ( 1) 00:08:56.247 11580.659 - 11633.298: 94.3460% ( 2) 00:08:56.247 11685.937 - 11738.577: 94.3541% ( 1) 00:08:56.247 11738.577 - 11791.216: 94.3782% ( 3) 00:08:56.247 11791.216 - 11843.855: 94.3943% ( 2) 00:08:56.247 11843.855 - 11896.495: 94.4185% ( 3) 00:08:56.247 11896.495 - 11949.134: 94.4507% ( 4) 00:08:56.247 11949.134 - 12001.773: 94.4910% ( 5) 00:08:56.247 12001.773 - 12054.413: 94.5393% ( 6) 00:08:56.247 12054.413 - 12107.052: 94.5876% ( 6) 00:08:56.247 12107.052 - 12159.692: 94.6279% ( 5) 00:08:56.247 12159.692 - 12212.331: 94.6682% ( 5) 00:08:56.247 12212.331 - 12264.970: 94.6923% ( 3) 00:08:56.247 12264.970 - 12317.610: 94.7084% ( 2) 00:08:56.247 12317.610 - 12370.249: 94.7165% ( 1) 00:08:56.247 12370.249 - 12422.888: 94.7407% ( 3) 00:08:56.247 12422.888 - 12475.528: 94.7568% ( 2) 00:08:56.247 12475.528 - 12528.167: 94.7729% ( 2) 00:08:56.247 12528.167 - 12580.806: 94.7970% ( 3) 00:08:56.247 12580.806 - 12633.446: 94.8131% ( 2) 00:08:56.247 12633.446 - 12686.085: 94.8293% ( 2) 00:08:56.247 12686.085 - 12738.724: 94.8454% ( 2) 00:08:56.247 12844.003 - 12896.643: 94.9178% ( 9) 00:08:56.247 12896.643 - 12949.282: 95.0548% ( 17) 00:08:56.247 12949.282 - 13001.921: 95.1595% ( 13) 00:08:56.247 13001.921 - 13054.561: 95.1836% ( 3) 00:08:56.247 13054.561 - 13107.200: 95.1997% ( 2) 00:08:56.247 13107.200 - 13159.839: 95.2159% ( 2) 00:08:56.247 13159.839 - 13212.479: 95.2320% ( 2) 00:08:56.247 13212.479 - 13265.118: 95.2481% ( 2) 00:08:56.247 13265.118 - 13317.757: 95.2561% ( 1) 00:08:56.247 13317.757 - 13370.397: 95.2722% ( 2) 00:08:56.247 13370.397 - 13423.036: 95.2803% ( 1) 00:08:56.247 13423.036 - 13475.676: 95.2964% ( 2) 00:08:56.247 13475.676 - 13580.954: 95.3206% ( 3) 00:08:56.247 13580.954 - 13686.233: 95.3447% ( 3) 00:08:56.247 13686.233 - 13791.512: 95.3608% ( 2) 00:08:56.247 15897.086 - 16002.365: 95.3930% ( 4) 00:08:56.247 16002.365 - 16107.643: 95.4575% ( 8) 00:08:56.247 16107.643 - 16212.922: 95.5219% ( 8) 
00:08:56.247 16212.922 - 16318.201: 95.5783% ( 7) 00:08:56.247 16318.201 - 16423.480: 95.6105% ( 4) 00:08:56.247 16423.480 - 16528.758: 95.6347% ( 3) 00:08:56.247 16528.758 - 16634.037: 95.6588% ( 3) 00:08:56.247 16634.037 - 16739.316: 95.6830% ( 3) 00:08:56.247 16739.316 - 16844.594: 95.7072% ( 3) 00:08:56.247 16844.594 - 16949.873: 95.7233% ( 2) 00:08:56.247 16949.873 - 17055.152: 95.7474% ( 3) 00:08:56.247 17055.152 - 17160.431: 95.7635% ( 2) 00:08:56.247 17160.431 - 17265.709: 95.7796% ( 2) 00:08:56.247 17265.709 - 17370.988: 95.8038% ( 3) 00:08:56.247 17370.988 - 17476.267: 95.8280% ( 3) 00:08:56.247 17476.267 - 17581.545: 95.8521% ( 3) 00:08:56.247 17581.545 - 17686.824: 95.8763% ( 3) 00:08:56.247 18950.169 - 19055.447: 95.8843% ( 1) 00:08:56.247 19055.447 - 19160.726: 95.9729% ( 11) 00:08:56.247 19160.726 - 19266.005: 96.0454% ( 9) 00:08:56.247 19266.005 - 19371.284: 96.1018% ( 7) 00:08:56.247 19371.284 - 19476.562: 96.1823% ( 10) 00:08:56.247 19476.562 - 19581.841: 96.2146% ( 4) 00:08:56.247 19581.841 - 19687.120: 96.2387% ( 3) 00:08:56.247 19687.120 - 19792.398: 96.2548% ( 2) 00:08:56.247 19792.398 - 19897.677: 96.2709% ( 2) 00:08:56.247 19897.677 - 20002.956: 96.2870% ( 2) 00:08:56.247 20002.956 - 20108.235: 96.3032% ( 2) 00:08:56.247 20108.235 - 20213.513: 96.3273% ( 3) 00:08:56.247 20213.513 - 20318.792: 96.3434% ( 2) 00:08:56.247 20318.792 - 20424.071: 96.3595% ( 2) 00:08:56.247 20424.071 - 20529.349: 96.3837% ( 3) 00:08:56.247 20529.349 - 20634.628: 96.3918% ( 1) 00:08:56.247 33899.746 - 34110.304: 96.4079% ( 2) 00:08:56.247 34110.304 - 34320.861: 96.4159% ( 1) 00:08:56.247 34320.861 - 34531.418: 96.4401% ( 3) 00:08:56.247 34531.418 - 34741.976: 96.4562% ( 2) 00:08:56.247 34741.976 - 34952.533: 96.4723% ( 2) 00:08:56.247 34952.533 - 35163.091: 96.4884% ( 2) 00:08:56.247 35163.091 - 35373.648: 96.5126% ( 3) 00:08:56.247 35373.648 - 35584.206: 96.5287% ( 2) 00:08:56.247 35584.206 - 35794.763: 96.5448% ( 2) 00:08:56.247 35794.763 - 36005.320: 96.5609% ( 2) 00:08:56.247 36005.320 - 36215.878: 96.5770% ( 2) 00:08:56.247 36215.878 - 36426.435: 96.5931% ( 2) 00:08:56.247 36426.435 - 36636.993: 96.6173% ( 3) 00:08:56.247 36636.993 - 36847.550: 96.6334% ( 2) 00:08:56.247 36847.550 - 37058.108: 96.6495% ( 2) 00:08:56.247 37058.108 - 37268.665: 96.6656% ( 2) 00:08:56.247 37268.665 - 37479.222: 96.6817% ( 2) 00:08:56.247 37479.222 - 37689.780: 96.6978% ( 2) 00:08:56.247 37689.780 - 37900.337: 96.7139% ( 2) 00:08:56.248 37900.337 - 38110.895: 96.7461% ( 4) 00:08:56.248 38110.895 - 38321.452: 96.7864% ( 5) 00:08:56.248 38321.452 - 38532.010: 96.8428% ( 7) 00:08:56.248 38532.010 - 38742.567: 96.9072% ( 8) 00:08:56.248 38742.567 - 38953.124: 96.9636% ( 7) 00:08:56.248 38953.124 - 39163.682: 97.0441% ( 10) 00:08:56.248 39163.682 - 39374.239: 97.0844% ( 5) 00:08:56.248 39374.239 - 39584.797: 97.1166% ( 4) 00:08:56.248 39584.797 - 39795.354: 97.1488% ( 4) 00:08:56.248 39795.354 - 40005.912: 97.1811% ( 4) 00:08:56.248 40005.912 - 40216.469: 97.2133% ( 4) 00:08:56.248 40216.469 - 40427.027: 97.2294% ( 2) 00:08:56.248 40427.027 - 40637.584: 97.2455% ( 2) 00:08:56.248 40637.584 - 40848.141: 97.2616% ( 2) 00:08:56.248 40848.141 - 41058.699: 97.2858% ( 3) 00:08:56.248 41058.699 - 41269.256: 97.3019% ( 2) 00:08:56.248 41269.256 - 41479.814: 97.3180% ( 2) 00:08:56.248 41479.814 - 41690.371: 97.3341% ( 2) 00:08:56.248 41690.371 - 41900.929: 97.3502% ( 2) 00:08:56.248 41900.929 - 42111.486: 97.3663% ( 2) 00:08:56.248 42111.486 - 42322.043: 97.3744% ( 1) 00:08:56.248 42322.043 - 42532.601: 97.3905% ( 2) 
00:08:56.248 42532.601 - 42743.158: 97.4066% ( 2) 00:08:56.248 42743.158 - 42953.716: 97.4227% ( 2) 00:08:56.248 58534.965 - 58956.080: 97.4307% ( 1) 00:08:56.248 58956.080 - 59377.195: 97.4952% ( 8) 00:08:56.248 59377.195 - 59798.310: 97.5515% ( 7) 00:08:56.248 59798.310 - 60219.425: 97.6804% ( 16) 00:08:56.248 60219.425 - 60640.540: 97.7932% ( 14) 00:08:56.248 60640.540 - 61061.655: 97.9220% ( 16) 00:08:56.248 61061.655 - 61482.769: 98.0348% ( 14) 00:08:56.248 61482.769 - 61903.884: 98.1878% ( 19) 00:08:56.248 61903.884 - 62324.999: 98.3570% ( 21) 00:08:56.248 62324.999 - 62746.114: 98.6469% ( 36) 00:08:56.248 62746.114 - 63167.229: 99.0818% ( 54) 00:08:56.248 63167.229 - 63588.344: 99.1785% ( 12) 00:08:56.248 63588.344 - 64009.459: 99.2349% ( 7) 00:08:56.248 64009.459 - 64430.573: 99.2912% ( 7) 00:08:56.248 64430.573 - 64851.688: 99.3235% ( 4) 00:08:56.248 64851.688 - 65272.803: 99.3557% ( 4) 00:08:56.248 65272.803 - 65693.918: 99.3879% ( 4) 00:08:56.248 65693.918 - 66115.033: 99.4201% ( 4) 00:08:56.248 66115.033 - 66536.148: 99.4604% ( 5) 00:08:56.248 66536.148 - 66957.263: 99.5087% ( 6) 00:08:56.248 66957.263 - 67378.378: 99.5490% ( 5) 00:08:56.248 67378.378 - 67799.492: 99.5892% ( 5) 00:08:56.248 67799.492 - 68220.607: 99.6215% ( 4) 00:08:56.248 68220.607 - 68641.722: 99.6537% ( 4) 00:08:56.248 68641.722 - 69062.837: 99.6939% ( 5) 00:08:56.248 69062.837 - 69483.952: 99.7181% ( 3) 00:08:56.248 69483.952 - 69905.067: 99.7503% ( 4) 00:08:56.248 69905.067 - 70326.182: 99.7825% ( 4) 00:08:56.248 70326.182 - 70747.296: 99.8228% ( 5) 00:08:56.248 70747.296 - 71168.411: 99.8550% ( 4) 00:08:56.248 71168.411 - 71589.526: 99.8872% ( 4) 00:08:56.248 71589.526 - 72010.641: 99.9195% ( 4) 00:08:56.248 72010.641 - 72431.756: 99.9597% ( 5) 00:08:56.248 72431.756 - 72852.871: 99.9919% ( 4) 00:08:56.248 72852.871 - 73273.986: 100.0000% ( 1) 00:08:56.248 00:08:56.248 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:56.248 ============================================================================== 00:08:56.248 Range in us Cumulative IO count 00:08:56.248 6895.756 - 6948.395: 0.0081% ( 1) 00:08:56.248 7001.035 - 7053.674: 0.0322% ( 3) 00:08:56.248 7053.674 - 7106.313: 0.0805% ( 6) 00:08:56.248 7106.313 - 7158.953: 0.1852% ( 13) 00:08:56.248 7158.953 - 7211.592: 0.3866% ( 25) 00:08:56.248 7211.592 - 7264.231: 0.6282% ( 30) 00:08:56.248 7264.231 - 7316.871: 1.0470% ( 52) 00:08:56.248 7316.871 - 7369.510: 1.7155% ( 83) 00:08:56.248 7369.510 - 7422.149: 2.6820% ( 120) 00:08:56.248 7422.149 - 7474.789: 3.9224% ( 154) 00:08:56.248 7474.789 - 7527.428: 5.1144% ( 148) 00:08:56.248 7527.428 - 7580.067: 6.9265% ( 225) 00:08:56.248 7580.067 - 7632.707: 8.8515% ( 239) 00:08:56.248 7632.707 - 7685.346: 10.8731% ( 251) 00:08:56.248 7685.346 - 7737.986: 13.4745% ( 323) 00:08:56.248 7737.986 - 7790.625: 16.7123% ( 402) 00:08:56.248 7790.625 - 7843.264: 20.1514% ( 427) 00:08:56.248 7843.264 - 7895.904: 23.1153% ( 368) 00:08:56.248 7895.904 - 7948.543: 26.9733% ( 479) 00:08:56.248 7948.543 - 8001.182: 30.3560% ( 420) 00:08:56.248 8001.182 - 8053.822: 33.7870% ( 426) 00:08:56.248 8053.822 - 8106.461: 38.2974% ( 560) 00:08:56.248 8106.461 - 8159.100: 42.4613% ( 517) 00:08:56.248 8159.100 - 8211.740: 46.1823% ( 462) 00:08:56.248 8211.740 - 8264.379: 49.2671% ( 383) 00:08:56.248 8264.379 - 8317.018: 52.5209% ( 404) 00:08:56.248 8317.018 - 8369.658: 54.6956% ( 270) 00:08:56.248 8369.658 - 8422.297: 56.7332% ( 253) 00:08:56.248 8422.297 - 8474.937: 58.7065% ( 245) 00:08:56.248 8474.937 - 8527.576: 59.9630% 
( 156) 00:08:56.248 8527.576 - 8580.215: 61.2758% ( 163) 00:08:56.248 8580.215 - 8632.855: 62.5483% ( 158) 00:08:56.248 8632.855 - 8685.494: 63.5793% ( 128) 00:08:56.248 8685.494 - 8738.133: 64.3686% ( 98) 00:08:56.248 8738.133 - 8790.773: 65.5042% ( 141) 00:08:56.248 8790.773 - 8843.412: 66.7848% ( 159) 00:08:56.248 8843.412 - 8896.051: 68.1137% ( 165) 00:08:56.248 8896.051 - 8948.691: 69.2896% ( 146) 00:08:56.248 8948.691 - 9001.330: 70.7152% ( 177) 00:08:56.248 9001.330 - 9053.969: 71.6817% ( 120) 00:08:56.248 9053.969 - 9106.609: 72.7529% ( 133) 00:08:56.248 9106.609 - 9159.248: 74.1704% ( 176) 00:08:56.248 9159.248 - 9211.888: 75.1450% ( 121) 00:08:56.248 9211.888 - 9264.527: 76.3692% ( 152) 00:08:56.248 9264.527 - 9317.166: 77.7626% ( 173) 00:08:56.248 9317.166 - 9369.806: 79.4378% ( 208) 00:08:56.248 9369.806 - 9422.445: 81.0325% ( 198) 00:08:56.248 9422.445 - 9475.084: 82.6192% ( 197) 00:08:56.248 9475.084 - 9527.724: 84.1012% ( 184) 00:08:56.248 9527.724 - 9580.363: 85.8328% ( 215) 00:08:56.248 9580.363 - 9633.002: 86.8879% ( 131) 00:08:56.248 9633.002 - 9685.642: 88.2490% ( 169) 00:08:56.248 9685.642 - 9738.281: 89.2719% ( 127) 00:08:56.248 9738.281 - 9790.920: 90.1981% ( 115) 00:08:56.248 9790.920 - 9843.560: 90.8264% ( 78) 00:08:56.248 9843.560 - 9896.199: 91.4465% ( 77) 00:08:56.248 9896.199 - 9948.839: 91.7526% ( 38) 00:08:56.248 9948.839 - 10001.478: 92.0184% ( 33) 00:08:56.248 10001.478 - 10054.117: 92.4130% ( 49) 00:08:56.248 10054.117 - 10106.757: 92.7030% ( 36) 00:08:56.248 10106.757 - 10159.396: 92.8479% ( 18) 00:08:56.248 10159.396 - 10212.035: 93.1782% ( 41) 00:08:56.248 10212.035 - 10264.675: 93.3473% ( 21) 00:08:56.248 10264.675 - 10317.314: 93.5245% ( 22) 00:08:56.248 10317.314 - 10369.953: 93.6292% ( 13) 00:08:56.248 10369.953 - 10422.593: 93.7178% ( 11) 00:08:56.248 10422.593 - 10475.232: 93.7661% ( 6) 00:08:56.248 10475.232 - 10527.871: 93.8064% ( 5) 00:08:56.248 10527.871 - 10580.511: 93.8386% ( 4) 00:08:56.248 10580.511 - 10633.150: 93.8950% ( 7) 00:08:56.248 10633.150 - 10685.790: 93.9514% ( 7) 00:08:56.248 10685.790 - 10738.429: 94.0561% ( 13) 00:08:56.248 10738.429 - 10791.068: 94.1366% ( 10) 00:08:56.248 10791.068 - 10843.708: 94.1849% ( 6) 00:08:56.248 10843.708 - 10896.347: 94.2171% ( 4) 00:08:56.248 10896.347 - 10948.986: 94.2494% ( 4) 00:08:56.248 10948.986 - 11001.626: 94.2655% ( 2) 00:08:56.248 11001.626 - 11054.265: 94.2816% ( 2) 00:08:56.248 11054.265 - 11106.904: 94.2896% ( 1) 00:08:56.248 11106.904 - 11159.544: 94.3057% ( 2) 00:08:56.248 11159.544 - 11212.183: 94.3218% ( 2) 00:08:56.248 11212.183 - 11264.822: 94.3299% ( 1) 00:08:56.248 11264.822 - 11317.462: 94.3380% ( 1) 00:08:56.248 11317.462 - 11370.101: 94.3541% ( 2) 00:08:56.248 11370.101 - 11422.741: 94.3702% ( 2) 00:08:56.248 11422.741 - 11475.380: 94.4024% ( 4) 00:08:56.248 11475.380 - 11528.019: 94.4104% ( 1) 00:08:56.248 11528.019 - 11580.659: 94.4427% ( 4) 00:08:56.248 11580.659 - 11633.298: 94.4588% ( 2) 00:08:56.248 11633.298 - 11685.937: 94.4749% ( 2) 00:08:56.248 11685.937 - 11738.577: 94.5071% ( 4) 00:08:56.248 11738.577 - 11791.216: 94.5796% ( 9) 00:08:56.248 11791.216 - 11843.855: 94.6682% ( 11) 00:08:56.248 11843.855 - 11896.495: 94.7407% ( 9) 00:08:56.248 11896.495 - 11949.134: 94.7648% ( 3) 00:08:56.248 11949.134 - 12001.773: 94.7890% ( 3) 00:08:56.248 12001.773 - 12054.413: 94.8051% ( 2) 00:08:56.248 12054.413 - 12107.052: 94.8293% ( 3) 00:08:56.248 12107.052 - 12159.692: 94.8454% ( 2) 00:08:56.248 12686.085 - 12738.724: 94.8534% ( 1) 00:08:56.248 12738.724 - 12791.364: 
94.8856% ( 4) 00:08:56.248 12791.364 - 12844.003: 94.9098% ( 3) 00:08:56.248 12844.003 - 12896.643: 94.9259% ( 2) 00:08:56.248 12896.643 - 12949.282: 95.1192% ( 24) 00:08:56.248 12949.282 - 13001.921: 95.2239% ( 13) 00:08:56.248 13001.921 - 13054.561: 95.2481% ( 3) 00:08:56.248 13054.561 - 13107.200: 95.2642% ( 2) 00:08:56.248 13107.200 - 13159.839: 95.2722% ( 1) 00:08:56.248 13159.839 - 13212.479: 95.2803% ( 1) 00:08:56.248 13212.479 - 13265.118: 95.2964% ( 2) 00:08:56.248 13265.118 - 13317.757: 95.3125% ( 2) 00:08:56.248 13317.757 - 13370.397: 95.3286% ( 2) 00:08:56.248 13370.397 - 13423.036: 95.3447% ( 2) 00:08:56.248 13423.036 - 13475.676: 95.3528% ( 1) 00:08:56.248 13580.954 - 13686.233: 95.3608% ( 1) 00:08:56.248 14949.578 - 15054.856: 95.3850% ( 3) 00:08:56.248 15054.856 - 15160.135: 95.4655% ( 10) 00:08:56.248 15160.135 - 15265.414: 95.6427% ( 22) 00:08:56.248 15265.414 - 15370.692: 95.7716% ( 16) 00:08:56.248 15370.692 - 15475.971: 95.7957% ( 3) 00:08:56.248 15475.971 - 15581.250: 95.8199% ( 3) 00:08:56.248 15581.250 - 15686.529: 95.8441% ( 3) 00:08:56.248 15686.529 - 15791.807: 95.8682% ( 3) 00:08:56.248 15791.807 - 15897.086: 95.8763% ( 1) 00:08:56.249 18002.660 - 18107.939: 95.8924% ( 2) 00:08:56.249 18107.939 - 18213.218: 95.9488% ( 7) 00:08:56.249 18213.218 - 18318.496: 96.0857% ( 17) 00:08:56.249 18318.496 - 18423.775: 96.1662% ( 10) 00:08:56.249 18423.775 - 18529.054: 96.1823% ( 2) 00:08:56.249 18529.054 - 18634.333: 96.2065% ( 3) 00:08:56.249 18634.333 - 18739.611: 96.2226% ( 2) 00:08:56.249 18739.611 - 18844.890: 96.2468% ( 3) 00:08:56.249 18844.890 - 18950.169: 96.2629% ( 2) 00:08:56.249 18950.169 - 19055.447: 96.2790% ( 2) 00:08:56.249 19055.447 - 19160.726: 96.3032% ( 3) 00:08:56.249 19160.726 - 19266.005: 96.3193% ( 2) 00:08:56.249 19266.005 - 19371.284: 96.3434% ( 3) 00:08:56.249 19371.284 - 19476.562: 96.3595% ( 2) 00:08:56.249 19476.562 - 19581.841: 96.3837% ( 3) 00:08:56.249 19581.841 - 19687.120: 96.3918% ( 1) 00:08:56.249 35584.206 - 35794.763: 96.4240% ( 4) 00:08:56.249 35794.763 - 36005.320: 96.4803% ( 7) 00:08:56.249 36005.320 - 36215.878: 96.5287% ( 6) 00:08:56.249 36215.878 - 36426.435: 96.5770% ( 6) 00:08:56.249 36426.435 - 36636.993: 96.6334% ( 7) 00:08:56.249 36636.993 - 36847.550: 96.6736% ( 5) 00:08:56.249 36847.550 - 37058.108: 96.7059% ( 4) 00:08:56.249 37058.108 - 37268.665: 96.7381% ( 4) 00:08:56.249 37268.665 - 37479.222: 96.7703% ( 4) 00:08:56.249 37479.222 - 37689.780: 96.8025% ( 4) 00:08:56.249 37689.780 - 37900.337: 96.8347% ( 4) 00:08:56.249 37900.337 - 38110.895: 96.8669% ( 4) 00:08:56.249 38110.895 - 38321.452: 96.8992% ( 4) 00:08:56.249 38321.452 - 38532.010: 96.9314% ( 4) 00:08:56.249 38532.010 - 38742.567: 96.9716% ( 5) 00:08:56.249 38742.567 - 38953.124: 97.0119% ( 5) 00:08:56.249 38953.124 - 39163.682: 97.0441% ( 4) 00:08:56.249 39163.682 - 39374.239: 97.0764% ( 4) 00:08:56.249 39374.239 - 39584.797: 97.1005% ( 3) 00:08:56.249 39584.797 - 39795.354: 97.1327% ( 4) 00:08:56.249 39795.354 - 40005.912: 97.1730% ( 5) 00:08:56.249 40005.912 - 40216.469: 97.2052% ( 4) 00:08:56.249 40216.469 - 40427.027: 97.2374% ( 4) 00:08:56.249 40427.027 - 40637.584: 97.2616% ( 3) 00:08:56.249 40637.584 - 40848.141: 97.2777% ( 2) 00:08:56.249 40848.141 - 41058.699: 97.2938% ( 2) 00:08:56.249 41058.699 - 41269.256: 97.3099% ( 2) 00:08:56.249 41269.256 - 41479.814: 97.3260% ( 2) 00:08:56.249 41479.814 - 41690.371: 97.3421% ( 2) 00:08:56.249 41690.371 - 41900.929: 97.3663% ( 3) 00:08:56.249 41900.929 - 42111.486: 97.3824% ( 2) 00:08:56.249 42111.486 - 
42322.043: 97.3985% ( 2) 00:08:56.249 42322.043 - 42532.601: 97.4227% ( 3) 00:08:56.249 56008.276 - 56429.391: 97.4307% ( 1) 00:08:56.249 56429.391 - 56850.506: 97.4468% ( 2) 00:08:56.249 56850.506 - 57271.621: 97.5677% ( 15) 00:08:56.249 57271.621 - 57692.736: 97.6562% ( 11) 00:08:56.249 57692.736 - 58113.851: 97.6885% ( 4) 00:08:56.249 58113.851 - 58534.965: 97.7207% ( 4) 00:08:56.249 58534.965 - 58956.080: 97.7448% ( 3) 00:08:56.249 58956.080 - 59377.195: 97.7771% ( 4) 00:08:56.249 59377.195 - 59798.310: 97.8495% ( 9) 00:08:56.249 59798.310 - 60219.425: 97.9462% ( 12) 00:08:56.249 60219.425 - 60640.540: 98.0670% ( 15) 00:08:56.249 60640.540 - 61061.655: 98.2281% ( 20) 00:08:56.249 61061.655 - 61482.769: 98.3972% ( 21) 00:08:56.249 61482.769 - 61903.884: 98.5261% ( 16) 00:08:56.249 61903.884 - 62324.999: 98.6711% ( 18) 00:08:56.249 62324.999 - 62746.114: 98.7919% ( 15) 00:08:56.249 62746.114 - 63167.229: 99.0335% ( 30) 00:08:56.249 63167.229 - 63588.344: 99.0818% ( 6) 00:08:56.249 63588.344 - 64009.459: 99.1221% ( 5) 00:08:56.249 64009.459 - 64430.573: 99.1785% ( 7) 00:08:56.249 64430.573 - 64851.688: 99.2349% ( 7) 00:08:56.249 64851.688 - 65272.803: 99.2912% ( 7) 00:08:56.249 65272.803 - 65693.918: 99.3396% ( 6) 00:08:56.249 65693.918 - 66115.033: 99.3798% ( 5) 00:08:56.249 66115.033 - 66536.148: 99.4282% ( 6) 00:08:56.249 66536.148 - 66957.263: 99.4604% ( 4) 00:08:56.249 66957.263 - 67378.378: 99.4845% ( 3) 00:08:56.249 69062.837 - 69483.952: 99.5168% ( 4) 00:08:56.249 69483.952 - 69905.067: 99.5490% ( 4) 00:08:56.249 69905.067 - 70326.182: 99.5812% ( 4) 00:08:56.249 70326.182 - 70747.296: 99.6134% ( 4) 00:08:56.249 70747.296 - 71168.411: 99.6456% ( 4) 00:08:56.249 71168.411 - 71589.526: 99.6778% ( 4) 00:08:56.249 71589.526 - 72010.641: 99.7101% ( 4) 00:08:56.249 72010.641 - 72431.756: 99.7423% ( 4) 00:08:56.249 72431.756 - 72852.871: 99.7745% ( 4) 00:08:56.249 72852.871 - 73273.986: 99.8067% ( 4) 00:08:56.249 73273.986 - 73695.100: 99.8470% ( 5) 00:08:56.249 73695.100 - 74116.215: 99.8792% ( 4) 00:08:56.249 74116.215 - 74537.330: 99.9114% ( 4) 00:08:56.249 74537.330 - 74958.445: 99.9436% ( 4) 00:08:56.249 74958.445 - 75379.560: 99.9758% ( 4) 00:08:56.249 75379.560 - 75800.675: 100.0000% ( 3) 00:08:56.249 00:08:56.249 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:56.249 ============================================================================== 00:08:56.249 Range in us Cumulative IO count 00:08:56.249 7001.035 - 7053.674: 0.0081% ( 1) 00:08:56.249 7106.313 - 7158.953: 0.0805% ( 9) 00:08:56.249 7158.953 - 7211.592: 0.2336% ( 19) 00:08:56.249 7211.592 - 7264.231: 0.5155% ( 35) 00:08:56.249 7264.231 - 7316.871: 0.9745% ( 57) 00:08:56.249 7316.871 - 7369.510: 1.7477% ( 96) 00:08:56.249 7369.510 - 7422.149: 2.7867% ( 129) 00:08:56.249 7422.149 - 7474.789: 4.0271% ( 154) 00:08:56.249 7474.789 - 7527.428: 5.5896% ( 194) 00:08:56.249 7527.428 - 7580.067: 7.6756% ( 259) 00:08:56.249 7580.067 - 7632.707: 9.6649% ( 247) 00:08:56.249 7632.707 - 7685.346: 11.8315% ( 269) 00:08:56.249 7685.346 - 7737.986: 14.2477% ( 300) 00:08:56.249 7737.986 - 7790.625: 16.9459% ( 335) 00:08:56.249 7790.625 - 7843.264: 20.1595% ( 399) 00:08:56.249 7843.264 - 7895.904: 23.3006% ( 390) 00:08:56.249 7895.904 - 7948.543: 26.6269% ( 413) 00:08:56.249 7948.543 - 8001.182: 30.2755% ( 453) 00:08:56.249 8001.182 - 8053.822: 34.1092% ( 476) 00:08:56.249 8053.822 - 8106.461: 37.8463% ( 464) 00:08:56.249 8106.461 - 8159.100: 41.4224% ( 444) 00:08:56.249 8159.100 - 8211.740: 44.6198% ( 397) 00:08:56.249 
8211.740 - 8264.379: 47.6321% ( 374) 00:08:56.249 8264.379 - 8317.018: 50.6685% ( 377) 00:08:56.249 8317.018 - 8369.658: 53.8257% ( 392) 00:08:56.249 8369.658 - 8422.297: 56.4997% ( 332) 00:08:56.249 8422.297 - 8474.937: 59.1817% ( 333) 00:08:56.249 8474.937 - 8527.576: 60.9375% ( 218) 00:08:56.249 8527.576 - 8580.215: 62.1939% ( 156) 00:08:56.249 8580.215 - 8632.855: 63.4262% ( 153) 00:08:56.249 8632.855 - 8685.494: 64.3444% ( 114) 00:08:56.249 8685.494 - 8738.133: 65.4398% ( 136) 00:08:56.249 8738.133 - 8790.773: 66.6076% ( 145) 00:08:56.249 8790.773 - 8843.412: 67.5580% ( 118) 00:08:56.249 8843.412 - 8896.051: 68.6775% ( 139) 00:08:56.249 8896.051 - 8948.691: 69.8856% ( 150) 00:08:56.249 8948.691 - 9001.330: 70.9488% ( 132) 00:08:56.249 9001.330 - 9053.969: 71.9233% ( 121) 00:08:56.249 9053.969 - 9106.609: 72.8173% ( 111) 00:08:56.249 9106.609 - 9159.248: 74.0738% ( 156) 00:08:56.249 9159.248 - 9211.888: 75.4591% ( 172) 00:08:56.249 9211.888 - 9264.527: 77.0860% ( 202) 00:08:56.249 9264.527 - 9317.166: 78.6082% ( 189) 00:08:56.249 9317.166 - 9369.806: 80.2835% ( 208) 00:08:56.249 9369.806 - 9422.445: 81.4675% ( 147) 00:08:56.249 9422.445 - 9475.084: 82.7883% ( 164) 00:08:56.249 9475.084 - 9527.724: 83.8515% ( 132) 00:08:56.249 9527.724 - 9580.363: 85.2529% ( 174) 00:08:56.249 9580.363 - 9633.002: 86.2919% ( 129) 00:08:56.249 9633.002 - 9685.642: 87.2986% ( 125) 00:08:56.249 9685.642 - 9738.281: 88.5148% ( 151) 00:08:56.249 9738.281 - 9790.920: 89.4733% ( 119) 00:08:56.249 9790.920 - 9843.560: 90.2142% ( 92) 00:08:56.249 9843.560 - 9896.199: 90.8264% ( 76) 00:08:56.249 9896.199 - 9948.839: 91.2371% ( 51) 00:08:56.249 9948.839 - 10001.478: 91.6157% ( 47) 00:08:56.249 10001.478 - 10054.117: 92.1070% ( 61) 00:08:56.249 10054.117 - 10106.757: 92.3647% ( 32) 00:08:56.249 10106.757 - 10159.396: 92.7110% ( 43) 00:08:56.249 10159.396 - 10212.035: 92.9204% ( 26) 00:08:56.249 10212.035 - 10264.675: 93.1540% ( 29) 00:08:56.249 10264.675 - 10317.314: 93.3473% ( 24) 00:08:56.250 10317.314 - 10369.953: 93.4037% ( 7) 00:08:56.250 10369.953 - 10422.593: 93.5164% ( 14) 00:08:56.250 10422.593 - 10475.232: 93.6050% ( 11) 00:08:56.250 10475.232 - 10527.871: 93.7178% ( 14) 00:08:56.250 10527.871 - 10580.511: 93.7661% ( 6) 00:08:56.250 10580.511 - 10633.150: 93.8064% ( 5) 00:08:56.250 10633.150 - 10685.790: 93.8789% ( 9) 00:08:56.250 10685.790 - 10738.429: 93.9433% ( 8) 00:08:56.250 10738.429 - 10791.068: 94.0883% ( 18) 00:08:56.250 10791.068 - 10843.708: 94.2010% ( 14) 00:08:56.250 10843.708 - 10896.347: 94.2494% ( 6) 00:08:56.250 10896.347 - 10948.986: 94.2735% ( 3) 00:08:56.250 10948.986 - 11001.626: 94.2896% ( 2) 00:08:56.250 11001.626 - 11054.265: 94.3057% ( 2) 00:08:56.250 11054.265 - 11106.904: 94.3218% ( 2) 00:08:56.250 11106.904 - 11159.544: 94.3299% ( 1) 00:08:56.250 11791.216 - 11843.855: 94.3460% ( 2) 00:08:56.250 11843.855 - 11896.495: 94.3621% ( 2) 00:08:56.250 11896.495 - 11949.134: 94.3863% ( 3) 00:08:56.250 11949.134 - 12001.773: 94.4024% ( 2) 00:08:56.250 12001.773 - 12054.413: 94.4346% ( 4) 00:08:56.250 12054.413 - 12107.052: 94.4668% ( 4) 00:08:56.250 12107.052 - 12159.692: 94.4910% ( 3) 00:08:56.250 12159.692 - 12212.331: 94.5393% ( 6) 00:08:56.250 12212.331 - 12264.970: 94.6037% ( 8) 00:08:56.250 12264.970 - 12317.610: 94.7004% ( 12) 00:08:56.250 12317.610 - 12370.249: 94.7970% ( 12) 00:08:56.250 12370.249 - 12422.888: 94.8937% ( 12) 00:08:56.250 12422.888 - 12475.528: 94.9903% ( 12) 00:08:56.250 12475.528 - 12528.167: 95.1111% ( 15) 00:08:56.250 12528.167 - 12580.806: 95.1675% ( 7) 
00:08:56.250 12580.806 - 12633.446: 95.2239% ( 7) 00:08:56.250 12633.446 - 12686.085: 95.2481% ( 3) 00:08:56.250 12686.085 - 12738.724: 95.2803% ( 4) 00:08:56.250 12738.724 - 12791.364: 95.3125% ( 4) 00:08:56.250 12791.364 - 12844.003: 95.3286% ( 2) 00:08:56.250 12844.003 - 12896.643: 95.3447% ( 2) 00:08:56.250 12949.282 - 13001.921: 95.3608% ( 2) 00:08:56.250 14002.069 - 14107.348: 95.3689% ( 1) 00:08:56.250 14317.905 - 14423.184: 95.4253% ( 7) 00:08:56.250 14423.184 - 14528.463: 95.5139% ( 11) 00:08:56.250 14528.463 - 14633.741: 95.6508% ( 17) 00:08:56.250 14633.741 - 14739.020: 95.7313% ( 10) 00:08:56.250 14739.020 - 14844.299: 95.7716% ( 5) 00:08:56.250 14844.299 - 14949.578: 95.8038% ( 4) 00:08:56.250 14949.578 - 15054.856: 95.8280% ( 3) 00:08:56.250 15054.856 - 15160.135: 95.8521% ( 3) 00:08:56.250 15160.135 - 15265.414: 95.8763% ( 3) 00:08:56.250 17686.824 - 17792.103: 95.9246% ( 6) 00:08:56.250 17792.103 - 17897.382: 96.0213% ( 12) 00:08:56.250 17897.382 - 18002.660: 96.1662% ( 18) 00:08:56.250 18002.660 - 18107.939: 96.1823% ( 2) 00:08:56.250 18107.939 - 18213.218: 96.2065% ( 3) 00:08:56.250 18213.218 - 18318.496: 96.2226% ( 2) 00:08:56.250 18318.496 - 18423.775: 96.2468% ( 3) 00:08:56.250 18423.775 - 18529.054: 96.2629% ( 2) 00:08:56.250 18529.054 - 18634.333: 96.2709% ( 1) 00:08:56.250 18634.333 - 18739.611: 96.2870% ( 2) 00:08:56.250 18739.611 - 18844.890: 96.3112% ( 3) 00:08:56.250 18844.890 - 18950.169: 96.3273% ( 2) 00:08:56.250 18950.169 - 19055.447: 96.3515% ( 3) 00:08:56.250 19055.447 - 19160.726: 96.3676% ( 2) 00:08:56.250 19160.726 - 19266.005: 96.3918% ( 3) 00:08:56.250 32004.729 - 32215.287: 96.4159% ( 3) 00:08:56.250 32215.287 - 32425.844: 96.4401% ( 3) 00:08:56.250 32425.844 - 32636.402: 96.4723% ( 4) 00:08:56.250 32636.402 - 32846.959: 96.5126% ( 5) 00:08:56.250 32846.959 - 33057.516: 96.5528% ( 5) 00:08:56.250 33057.516 - 33268.074: 96.5851% ( 4) 00:08:56.250 33268.074 - 33478.631: 96.6253% ( 5) 00:08:56.250 33478.631 - 33689.189: 96.6575% ( 4) 00:08:56.250 33689.189 - 33899.746: 96.6817% ( 3) 00:08:56.250 33899.746 - 34110.304: 96.6978% ( 2) 00:08:56.250 34110.304 - 34320.861: 96.7139% ( 2) 00:08:56.250 34320.861 - 34531.418: 96.7220% ( 1) 00:08:56.250 34531.418 - 34741.976: 96.7381% ( 2) 00:08:56.250 34741.976 - 34952.533: 96.7542% ( 2) 00:08:56.250 34952.533 - 35163.091: 96.7703% ( 2) 00:08:56.250 35163.091 - 35373.648: 96.7864% ( 2) 00:08:56.250 35373.648 - 35584.206: 96.8025% ( 2) 00:08:56.250 35584.206 - 35794.763: 96.8186% ( 2) 00:08:56.250 35794.763 - 36005.320: 96.8347% ( 2) 00:08:56.250 36005.320 - 36215.878: 96.8508% ( 2) 00:08:56.250 36215.878 - 36426.435: 96.8669% ( 2) 00:08:56.250 36426.435 - 36636.993: 96.8911% ( 3) 00:08:56.250 36636.993 - 36847.550: 96.9072% ( 2) 00:08:56.250 38742.567 - 38953.124: 96.9153% ( 1) 00:08:56.250 38953.124 - 39163.682: 96.9394% ( 3) 00:08:56.250 39163.682 - 39374.239: 96.9555% ( 2) 00:08:56.250 39374.239 - 39584.797: 96.9716% ( 2) 00:08:56.250 39584.797 - 39795.354: 96.9958% ( 3) 00:08:56.250 39795.354 - 40005.912: 97.0119% ( 2) 00:08:56.250 40005.912 - 40216.469: 97.0280% ( 2) 00:08:56.250 40216.469 - 40427.027: 97.0441% ( 2) 00:08:56.250 40427.027 - 40637.584: 97.0602% ( 2) 00:08:56.250 40637.584 - 40848.141: 97.0844% ( 3) 00:08:56.250 40848.141 - 41058.699: 97.1005% ( 2) 00:08:56.250 41058.699 - 41269.256: 97.1166% ( 2) 00:08:56.250 41269.256 - 41479.814: 97.1327% ( 2) 00:08:56.250 41479.814 - 41690.371: 97.1488% ( 2) 00:08:56.250 41690.371 - 41900.929: 97.1730% ( 3) 00:08:56.250 41900.929 - 42111.486: 97.1891% ( 2) 
00:08:56.250 42111.486 - 42322.043: 97.2052% ( 2) 00:08:56.250 42322.043 - 42532.601: 97.2213% ( 2) 00:08:56.250 42532.601 - 42743.158: 97.2374% ( 2) 00:08:56.250 42743.158 - 42953.716: 97.2535% ( 2) 00:08:56.250 42953.716 - 43164.273: 97.2697% ( 2) 00:08:56.250 43164.273 - 43374.831: 97.2938% ( 3) 00:08:56.250 43374.831 - 43585.388: 97.3019% ( 1) 00:08:56.250 43585.388 - 43795.945: 97.3260% ( 3) 00:08:56.250 43795.945 - 44006.503: 97.3421% ( 2) 00:08:56.250 44006.503 - 44217.060: 97.3582% ( 2) 00:08:56.250 44217.060 - 44427.618: 97.3744% ( 2) 00:08:56.250 44427.618 - 44638.175: 97.3905% ( 2) 00:08:56.250 44638.175 - 44848.733: 97.4066% ( 2) 00:08:56.250 44848.733 - 45059.290: 97.4227% ( 2) 00:08:56.250 53060.472 - 53271.030: 97.4549% ( 4) 00:08:56.250 53271.030 - 53481.587: 97.5032% ( 6) 00:08:56.250 53481.587 - 53692.145: 97.5354% ( 4) 00:08:56.250 53692.145 - 53902.702: 97.6160% ( 10) 00:08:56.250 53902.702 - 54323.817: 97.6562% ( 5) 00:08:56.250 54323.817 - 54744.932: 97.6885% ( 4) 00:08:56.250 54744.932 - 55166.047: 97.7126% ( 3) 00:08:56.250 55166.047 - 55587.161: 97.7448% ( 4) 00:08:56.250 55587.161 - 56008.276: 97.7771% ( 4) 00:08:56.250 56008.276 - 56429.391: 97.8012% ( 3) 00:08:56.250 56429.391 - 56850.506: 97.8334% ( 4) 00:08:56.250 56850.506 - 57271.621: 97.8657% ( 4) 00:08:56.250 57271.621 - 57692.736: 97.8979% ( 4) 00:08:56.250 57692.736 - 58113.851: 97.9381% ( 5) 00:08:56.250 58113.851 - 58534.965: 97.9462% ( 1) 00:08:56.250 58534.965 - 58956.080: 97.9623% ( 2) 00:08:56.250 58956.080 - 59377.195: 97.9704% ( 1) 00:08:56.250 59377.195 - 59798.310: 97.9945% ( 3) 00:08:56.250 59798.310 - 60219.425: 98.0912% ( 12) 00:08:56.250 60219.425 - 60640.540: 98.1798% ( 11) 00:08:56.250 60640.540 - 61061.655: 98.2603% ( 10) 00:08:56.250 61061.655 - 61482.769: 98.3731% ( 14) 00:08:56.250 61482.769 - 61903.884: 98.4778% ( 13) 00:08:56.250 61903.884 - 62324.999: 98.6066% ( 16) 00:08:56.250 62324.999 - 62746.114: 98.7355% ( 16) 00:08:56.250 62746.114 - 63167.229: 98.8644% ( 16) 00:08:56.250 63167.229 - 63588.344: 99.0979% ( 29) 00:08:56.250 63588.344 - 64009.459: 99.1543% ( 7) 00:08:56.250 64009.459 - 64430.573: 99.2026% ( 6) 00:08:56.250 64430.573 - 64851.688: 99.2590% ( 7) 00:08:56.250 64851.688 - 65272.803: 99.2993% ( 5) 00:08:56.250 65272.803 - 65693.918: 99.3557% ( 7) 00:08:56.250 65693.918 - 66115.033: 99.4120% ( 7) 00:08:56.250 66115.033 - 66536.148: 99.4443% ( 4) 00:08:56.250 66536.148 - 66957.263: 99.4765% ( 4) 00:08:56.250 66957.263 - 67378.378: 99.4845% ( 1) 00:08:56.250 71589.526 - 72010.641: 99.5006% ( 2) 00:08:56.250 72010.641 - 72431.756: 99.5329% ( 4) 00:08:56.250 72431.756 - 72852.871: 99.5731% ( 5) 00:08:56.250 72852.871 - 73273.986: 99.6134% ( 5) 00:08:56.250 73273.986 - 73695.100: 99.6456% ( 4) 00:08:56.250 73695.100 - 74116.215: 99.6778% ( 4) 00:08:56.250 74116.215 - 74537.330: 99.7181% ( 5) 00:08:56.250 74537.330 - 74958.445: 99.7503% ( 4) 00:08:56.250 74958.445 - 75379.560: 99.7906% ( 5) 00:08:56.250 75379.560 - 75800.675: 99.8228% ( 4) 00:08:56.250 75800.675 - 76221.790: 99.8550% ( 4) 00:08:56.250 76221.790 - 76642.904: 99.9114% ( 7) 00:08:56.250 76642.904 - 77064.019: 99.9436% ( 4) 00:08:56.250 77064.019 - 77485.134: 99.9758% ( 4) 00:08:56.250 77485.134 - 77906.249: 100.0000% ( 3) 00:08:56.250 00:08:56.250 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:56.250 ============================================================================== 00:08:56.250 Range in us Cumulative IO count 00:08:56.250 7001.035 - 7053.674: 0.0403% ( 5) 00:08:56.250 7053.674 
- 7106.313: 0.0483% ( 1) 00:08:56.250 7106.313 - 7158.953: 0.0725% ( 3) 00:08:56.250 7158.953 - 7211.592: 0.1530% ( 10) 00:08:56.250 7211.592 - 7264.231: 0.4108% ( 32) 00:08:56.250 7264.231 - 7316.871: 0.8054% ( 49) 00:08:56.250 7316.871 - 7369.510: 1.6833% ( 109) 00:08:56.250 7369.510 - 7422.149: 2.6176% ( 116) 00:08:56.250 7422.149 - 7474.789: 3.9224% ( 162) 00:08:56.250 7474.789 - 7527.428: 5.3157% ( 173) 00:08:56.250 7527.428 - 7580.067: 7.0796% ( 219) 00:08:56.250 7580.067 - 7632.707: 9.0931% ( 250) 00:08:56.250 7632.707 - 7685.346: 11.8476% ( 342) 00:08:56.250 7685.346 - 7737.986: 14.7471% ( 360) 00:08:56.250 7737.986 - 7790.625: 18.0815% ( 414) 00:08:56.251 7790.625 - 7843.264: 21.1904% ( 386) 00:08:56.251 7843.264 - 7895.904: 24.3557% ( 393) 00:08:56.251 7895.904 - 7948.543: 27.7787% ( 425) 00:08:56.251 7948.543 - 8001.182: 31.2339% ( 429) 00:08:56.251 8001.182 - 8053.822: 34.8180% ( 445) 00:08:56.251 8053.822 - 8106.461: 38.3457% ( 438) 00:08:56.251 8106.461 - 8159.100: 41.5029% ( 392) 00:08:56.251 8159.100 - 8211.740: 44.2252% ( 338) 00:08:56.251 8211.740 - 8264.379: 47.2052% ( 370) 00:08:56.251 8264.379 - 8317.018: 49.5812% ( 295) 00:08:56.251 8317.018 - 8369.658: 52.2310% ( 329) 00:08:56.251 8369.658 - 8422.297: 55.0983% ( 356) 00:08:56.251 8422.297 - 8474.937: 57.5789% ( 308) 00:08:56.251 8474.937 - 8527.576: 59.5280% ( 242) 00:08:56.251 8527.576 - 8580.215: 61.8154% ( 284) 00:08:56.251 8580.215 - 8632.855: 63.6356% ( 226) 00:08:56.251 8632.855 - 8685.494: 64.9404% ( 162) 00:08:56.251 8685.494 - 8738.133: 65.9633% ( 127) 00:08:56.251 8738.133 - 8790.773: 67.0747% ( 138) 00:08:56.251 8790.773 - 8843.412: 68.1459% ( 133) 00:08:56.251 8843.412 - 8896.051: 69.1124% ( 120) 00:08:56.251 8896.051 - 8948.691: 69.9340% ( 102) 00:08:56.251 8948.691 - 9001.330: 70.7152% ( 97) 00:08:56.251 9001.330 - 9053.969: 71.3918% ( 84) 00:08:56.251 9053.969 - 9106.609: 72.3985% ( 125) 00:08:56.251 9106.609 - 9159.248: 73.3972% ( 124) 00:08:56.251 9159.248 - 9211.888: 74.7262% ( 165) 00:08:56.251 9211.888 - 9264.527: 76.2887% ( 194) 00:08:56.251 9264.527 - 9317.166: 77.8028% ( 188) 00:08:56.251 9317.166 - 9369.806: 79.2123% ( 175) 00:08:56.251 9369.806 - 9422.445: 80.8231% ( 200) 00:08:56.251 9422.445 - 9475.084: 81.9829% ( 144) 00:08:56.251 9475.084 - 9527.724: 83.6179% ( 203) 00:08:56.251 9527.724 - 9580.363: 85.1160% ( 186) 00:08:56.251 9580.363 - 9633.002: 86.8315% ( 213) 00:08:56.251 9633.002 - 9685.642: 88.5954% ( 219) 00:08:56.251 9685.642 - 9738.281: 89.6343% ( 129) 00:08:56.251 9738.281 - 9790.920: 90.3753% ( 92) 00:08:56.251 9790.920 - 9843.560: 91.3579% ( 122) 00:08:56.251 9843.560 - 9896.199: 91.7767% ( 52) 00:08:56.251 9896.199 - 9948.839: 92.1956% ( 52) 00:08:56.251 9948.839 - 10001.478: 92.4774% ( 35) 00:08:56.251 10001.478 - 10054.117: 92.6224% ( 18) 00:08:56.251 10054.117 - 10106.757: 92.7432% ( 15) 00:08:56.251 10106.757 - 10159.396: 92.8479% ( 13) 00:08:56.251 10159.396 - 10212.035: 92.9043% ( 7) 00:08:56.251 10212.035 - 10264.675: 92.9526% ( 6) 00:08:56.251 10264.675 - 10317.314: 93.0976% ( 18) 00:08:56.251 10317.314 - 10369.953: 93.2265% ( 16) 00:08:56.251 10369.953 - 10422.593: 93.2748% ( 6) 00:08:56.251 10422.593 - 10475.232: 93.3151% ( 5) 00:08:56.251 10475.232 - 10527.871: 93.3553% ( 5) 00:08:56.251 10527.871 - 10580.511: 93.4278% ( 9) 00:08:56.251 10580.511 - 10633.150: 93.4923% ( 8) 00:08:56.251 10633.150 - 10685.790: 93.6211% ( 16) 00:08:56.251 10685.790 - 10738.429: 93.7903% ( 21) 00:08:56.251 10738.429 - 10791.068: 93.8547% ( 8) 00:08:56.251 10791.068 - 10843.708: 
93.9191% ( 8) 00:08:56.251 10843.708 - 10896.347: 93.9836% ( 8) 00:08:56.251 10896.347 - 10948.986: 94.0722% ( 11) 00:08:56.251 10948.986 - 11001.626: 94.1285% ( 7) 00:08:56.251 11001.626 - 11054.265: 94.1930% ( 8) 00:08:56.251 11054.265 - 11106.904: 94.2091% ( 2) 00:08:56.251 11106.904 - 11159.544: 94.2252% ( 2) 00:08:56.251 11159.544 - 11212.183: 94.2413% ( 2) 00:08:56.251 11212.183 - 11264.822: 94.2574% ( 2) 00:08:56.251 11264.822 - 11317.462: 94.2816% ( 3) 00:08:56.251 11317.462 - 11370.101: 94.3057% ( 3) 00:08:56.251 11370.101 - 11422.741: 94.3218% ( 2) 00:08:56.251 11422.741 - 11475.380: 94.3380% ( 2) 00:08:56.251 11580.659 - 11633.298: 94.3702% ( 4) 00:08:56.251 11633.298 - 11685.937: 94.4104% ( 5) 00:08:56.251 11685.937 - 11738.577: 94.4507% ( 5) 00:08:56.251 11738.577 - 11791.216: 94.4910% ( 5) 00:08:56.251 11791.216 - 11843.855: 94.5312% ( 5) 00:08:56.251 11843.855 - 11896.495: 94.6521% ( 15) 00:08:56.251 11896.495 - 11949.134: 94.6843% ( 4) 00:08:56.251 11949.134 - 12001.773: 94.7245% ( 5) 00:08:56.251 12001.773 - 12054.413: 94.7487% ( 3) 00:08:56.251 12054.413 - 12107.052: 94.7729% ( 3) 00:08:56.251 12107.052 - 12159.692: 94.8212% ( 6) 00:08:56.251 12159.692 - 12212.331: 94.8454% ( 3) 00:08:56.251 12212.331 - 12264.970: 94.9098% ( 8) 00:08:56.251 12264.970 - 12317.610: 94.9420% ( 4) 00:08:56.251 12317.610 - 12370.249: 94.9662% ( 3) 00:08:56.251 12370.249 - 12422.888: 94.9903% ( 3) 00:08:56.251 12422.888 - 12475.528: 95.0145% ( 3) 00:08:56.251 12475.528 - 12528.167: 95.0306% ( 2) 00:08:56.251 12528.167 - 12580.806: 95.0548% ( 3) 00:08:56.251 12580.806 - 12633.446: 95.0950% ( 5) 00:08:56.251 12633.446 - 12686.085: 95.1434% ( 6) 00:08:56.251 12686.085 - 12738.724: 95.2078% ( 8) 00:08:56.251 12738.724 - 12791.364: 95.2722% ( 8) 00:08:56.251 12791.364 - 12844.003: 95.3044% ( 4) 00:08:56.251 12844.003 - 12896.643: 95.3206% ( 2) 00:08:56.251 12896.643 - 12949.282: 95.3367% ( 2) 00:08:56.251 12949.282 - 13001.921: 95.3608% ( 3) 00:08:56.251 13791.512 - 13896.790: 95.4091% ( 6) 00:08:56.251 13896.790 - 14002.069: 95.4655% ( 7) 00:08:56.251 14002.069 - 14107.348: 95.5219% ( 7) 00:08:56.251 14107.348 - 14212.627: 95.7555% ( 29) 00:08:56.251 14212.627 - 14317.905: 95.7796% ( 3) 00:08:56.251 14317.905 - 14423.184: 95.8119% ( 4) 00:08:56.251 14423.184 - 14528.463: 95.8360% ( 3) 00:08:56.251 14528.463 - 14633.741: 95.8521% ( 2) 00:08:56.251 14633.741 - 14739.020: 95.8763% ( 3) 00:08:56.251 17370.988 - 17476.267: 95.9085% ( 4) 00:08:56.251 17476.267 - 17581.545: 96.0052% ( 12) 00:08:56.251 17581.545 - 17686.824: 96.1099% ( 13) 00:08:56.251 17686.824 - 17792.103: 96.1662% ( 7) 00:08:56.251 17792.103 - 17897.382: 96.1823% ( 2) 00:08:56.251 17897.382 - 18002.660: 96.2065% ( 3) 00:08:56.251 18002.660 - 18107.939: 96.2226% ( 2) 00:08:56.251 18107.939 - 18213.218: 96.2387% ( 2) 00:08:56.251 18213.218 - 18318.496: 96.2629% ( 3) 00:08:56.251 18318.496 - 18423.775: 96.2790% ( 2) 00:08:56.251 18423.775 - 18529.054: 96.2951% ( 2) 00:08:56.251 18529.054 - 18634.333: 96.3112% ( 2) 00:08:56.251 18634.333 - 18739.611: 96.3273% ( 2) 00:08:56.251 18739.611 - 18844.890: 96.3434% ( 2) 00:08:56.251 18844.890 - 18950.169: 96.3676% ( 3) 00:08:56.251 18950.169 - 19055.447: 96.3918% ( 3) 00:08:56.251 28425.253 - 28635.810: 96.4159% ( 3) 00:08:56.251 28635.810 - 28846.368: 96.4401% ( 3) 00:08:56.251 28846.368 - 29056.925: 96.4965% ( 7) 00:08:56.251 29056.925 - 29267.483: 96.5448% ( 6) 00:08:56.251 29267.483 - 29478.040: 96.5851% ( 5) 00:08:56.251 29478.040 - 29688.598: 96.6253% ( 5) 00:08:56.251 29688.598 - 29899.155: 
96.6414% ( 2) 00:08:56.251 29899.155 - 30109.712: 96.6575% ( 2) 00:08:56.251 30109.712 - 30320.270: 96.6736% ( 2) 00:08:56.251 30320.270 - 30530.827: 96.6898% ( 2) 00:08:56.251 30530.827 - 30741.385: 96.7059% ( 2) 00:08:56.251 30741.385 - 30951.942: 96.7220% ( 2) 00:08:56.251 30951.942 - 31162.500: 96.7381% ( 2) 00:08:56.251 31162.500 - 31373.057: 96.7542% ( 2) 00:08:56.251 31373.057 - 31583.614: 96.7703% ( 2) 00:08:56.251 31583.614 - 31794.172: 96.7864% ( 2) 00:08:56.251 31794.172 - 32004.729: 96.8025% ( 2) 00:08:56.251 32004.729 - 32215.287: 96.8186% ( 2) 00:08:56.251 32215.287 - 32425.844: 96.8347% ( 2) 00:08:56.251 32425.844 - 32636.402: 96.8508% ( 2) 00:08:56.251 32636.402 - 32846.959: 96.8669% ( 2) 00:08:56.251 32846.959 - 33057.516: 96.8911% ( 3) 00:08:56.251 33057.516 - 33268.074: 96.9072% ( 2) 00:08:56.251 42322.043 - 42532.601: 96.9233% ( 2) 00:08:56.251 42532.601 - 42743.158: 96.9475% ( 3) 00:08:56.251 42743.158 - 42953.716: 96.9636% ( 2) 00:08:56.251 42953.716 - 43164.273: 96.9958% ( 4) 00:08:56.251 43164.273 - 43374.831: 97.0039% ( 1) 00:08:56.251 43374.831 - 43585.388: 97.0280% ( 3) 00:08:56.251 43585.388 - 43795.945: 97.0522% ( 3) 00:08:56.251 43795.945 - 44006.503: 97.0764% ( 3) 00:08:56.251 44006.503 - 44217.060: 97.0844% ( 1) 00:08:56.251 44217.060 - 44427.618: 97.1086% ( 3) 00:08:56.251 44427.618 - 44638.175: 97.1327% ( 3) 00:08:56.251 44638.175 - 44848.733: 97.1569% ( 3) 00:08:56.251 44848.733 - 45059.290: 97.1730% ( 2) 00:08:56.251 45059.290 - 45269.847: 97.1972% ( 3) 00:08:56.251 45269.847 - 45480.405: 97.2133% ( 2) 00:08:56.251 45480.405 - 45690.962: 97.2374% ( 3) 00:08:56.251 45690.962 - 45901.520: 97.2616% ( 3) 00:08:56.251 45901.520 - 46112.077: 97.2777% ( 2) 00:08:56.251 46112.077 - 46322.635: 97.2858% ( 1) 00:08:56.251 46322.635 - 46533.192: 97.3180% ( 4) 00:08:56.251 46533.192 - 46743.749: 97.3421% ( 3) 00:08:56.251 46743.749 - 46954.307: 97.3582% ( 2) 00:08:56.251 46954.307 - 47164.864: 97.3663% ( 1) 00:08:56.251 47164.864 - 47375.422: 97.3824% ( 2) 00:08:56.251 47375.422 - 47585.979: 97.3985% ( 2) 00:08:56.251 47585.979 - 47796.537: 97.4066% ( 1) 00:08:56.251 47796.537 - 48007.094: 97.4146% ( 1) 00:08:56.251 48007.094 - 48217.651: 97.4227% ( 1) 00:08:56.251 49691.553 - 49902.111: 97.4468% ( 3) 00:08:56.251 49902.111 - 50112.668: 97.5113% ( 8) 00:08:56.251 50112.668 - 50323.226: 97.5354% ( 3) 00:08:56.251 50323.226 - 50533.783: 97.6160% ( 10) 00:08:56.251 50533.783 - 50744.341: 97.6482% ( 4) 00:08:56.251 50744.341 - 50954.898: 97.6643% ( 2) 00:08:56.251 50954.898 - 51165.455: 97.6804% ( 2) 00:08:56.251 51165.455 - 51376.013: 97.6965% ( 2) 00:08:56.251 51376.013 - 51586.570: 97.7126% ( 2) 00:08:56.251 51586.570 - 51797.128: 97.7287% ( 2) 00:08:56.251 51797.128 - 52007.685: 97.7448% ( 2) 00:08:56.251 52007.685 - 52218.243: 97.7529% ( 1) 00:08:56.251 52218.243 - 52428.800: 97.7771% ( 3) 00:08:56.251 52428.800 - 52639.357: 97.7851% ( 1) 00:08:56.251 52639.357 - 52849.915: 97.8012% ( 2) 00:08:56.252 52849.915 - 53060.472: 97.8173% ( 2) 00:08:56.252 53060.472 - 53271.030: 97.8334% ( 2) 00:08:56.252 53271.030 - 53481.587: 97.8495% ( 2) 00:08:56.252 53481.587 - 53692.145: 97.8657% ( 2) 00:08:56.252 53692.145 - 53902.702: 97.8818% ( 2) 00:08:56.252 53902.702 - 54323.817: 97.9059% ( 3) 00:08:56.252 54323.817 - 54744.932: 97.9381% ( 4) 00:08:56.252 58956.080 - 59377.195: 97.9704% ( 4) 00:08:56.252 59377.195 - 59798.310: 98.0106% ( 5) 00:08:56.252 59798.310 - 60219.425: 98.0992% ( 11) 00:08:56.252 60219.425 - 60640.540: 98.1717% ( 9) 00:08:56.252 60640.540 - 61061.655: 
98.2523% ( 10) 00:08:56.252 61061.655 - 61482.769: 98.3650% ( 14) 00:08:56.252 61482.769 - 61903.884: 98.5422% ( 22) 00:08:56.252 61903.884 - 62324.999: 98.6630% ( 15) 00:08:56.252 62324.999 - 62746.114: 98.7516% ( 11) 00:08:56.252 62746.114 - 63167.229: 98.8563% ( 13) 00:08:56.252 63167.229 - 63588.344: 99.0979% ( 30) 00:08:56.252 63588.344 - 64009.459: 99.1463% ( 6) 00:08:56.252 64009.459 - 64430.573: 99.2026% ( 7) 00:08:56.252 64430.573 - 64851.688: 99.2510% ( 6) 00:08:56.252 64851.688 - 65272.803: 99.2993% ( 6) 00:08:56.252 65272.803 - 65693.918: 99.3396% ( 5) 00:08:56.252 65693.918 - 66115.033: 99.3959% ( 7) 00:08:56.252 66115.033 - 66536.148: 99.4362% ( 5) 00:08:56.252 66536.148 - 66957.263: 99.4604% ( 3) 00:08:56.252 66957.263 - 67378.378: 99.4845% ( 3) 00:08:56.252 73273.986 - 73695.100: 99.4926% ( 1) 00:08:56.252 73695.100 - 74116.215: 99.5087% ( 2) 00:08:56.252 74116.215 - 74537.330: 99.5248% ( 2) 00:08:56.252 74537.330 - 74958.445: 99.5570% ( 4) 00:08:56.252 74958.445 - 75379.560: 99.5973% ( 5) 00:08:56.252 75379.560 - 75800.675: 99.6295% ( 4) 00:08:56.252 75800.675 - 76221.790: 99.6698% ( 5) 00:08:56.252 76221.790 - 76642.904: 99.7181% ( 6) 00:08:56.252 76642.904 - 77064.019: 99.7584% ( 5) 00:08:56.252 77064.019 - 77485.134: 99.8309% ( 9) 00:08:56.252 77485.134 - 77906.249: 99.9114% ( 10) 00:08:56.252 77906.249 - 78327.364: 99.9436% ( 4) 00:08:56.252 78327.364 - 78748.479: 99.9597% ( 2) 00:08:56.252 78748.479 - 79169.594: 99.9839% ( 3) 00:08:56.252 79169.594 - 79590.708: 100.0000% ( 2) 00:08:56.252 00:08:56.252 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:56.252 ============================================================================== 00:08:56.252 Range in us Cumulative IO count 00:08:56.252 7053.674 - 7106.313: 0.0161% ( 2) 00:08:56.252 7106.313 - 7158.953: 0.0403% ( 3) 00:08:56.252 7158.953 - 7211.592: 0.1289% ( 11) 00:08:56.252 7211.592 - 7264.231: 0.2738% ( 18) 00:08:56.252 7264.231 - 7316.871: 0.5718% ( 37) 00:08:56.252 7316.871 - 7369.510: 1.0229% ( 56) 00:08:56.252 7369.510 - 7422.149: 1.9008% ( 109) 00:08:56.252 7422.149 - 7474.789: 3.2941% ( 173) 00:08:56.252 7474.789 - 7527.428: 4.8003% ( 187) 00:08:56.252 7527.428 - 7580.067: 7.0554% ( 280) 00:08:56.252 7580.067 - 7632.707: 9.4555% ( 298) 00:08:56.252 7632.707 - 7685.346: 12.3470% ( 359) 00:08:56.252 7685.346 - 7737.986: 15.5686% ( 400) 00:08:56.252 7737.986 - 7790.625: 18.6775% ( 386) 00:08:56.252 7790.625 - 7843.264: 22.1247% ( 428) 00:08:56.252 7843.264 - 7895.904: 25.1047% ( 370) 00:08:56.252 7895.904 - 7948.543: 28.2539% ( 391) 00:08:56.252 7948.543 - 8001.182: 31.3547% ( 385) 00:08:56.252 8001.182 - 8053.822: 34.6408% ( 408) 00:08:56.252 8053.822 - 8106.461: 38.0718% ( 426) 00:08:56.252 8106.461 - 8159.100: 41.3257% ( 404) 00:08:56.252 8159.100 - 8211.740: 44.3541% ( 376) 00:08:56.252 8211.740 - 8264.379: 47.2455% ( 359) 00:08:56.252 8264.379 - 8317.018: 50.0966% ( 354) 00:08:56.252 8317.018 - 8369.658: 52.6418% ( 316) 00:08:56.252 8369.658 - 8422.297: 55.4124% ( 344) 00:08:56.252 8422.297 - 8474.937: 57.6273% ( 275) 00:08:56.252 8474.937 - 8527.576: 59.6649% ( 253) 00:08:56.252 8527.576 - 8580.215: 61.6865% ( 251) 00:08:56.252 8580.215 - 8632.855: 63.4423% ( 218) 00:08:56.252 8632.855 - 8685.494: 64.8035% ( 169) 00:08:56.252 8685.494 - 8738.133: 66.0519% ( 155) 00:08:56.252 8738.133 - 8790.773: 67.4291% ( 171) 00:08:56.252 8790.773 - 8843.412: 68.1701% ( 92) 00:08:56.252 8843.412 - 8896.051: 68.8386% ( 83) 00:08:56.252 8896.051 - 8948.691: 69.5554% ( 89) 00:08:56.252 8948.691 - 
9001.330: 70.3286% ( 96) 00:08:56.252 9001.330 - 9053.969: 71.3112% ( 122) 00:08:56.252 9053.969 - 9106.609: 72.3019% ( 123) 00:08:56.252 9106.609 - 9159.248: 73.5261% ( 152) 00:08:56.252 9159.248 - 9211.888: 75.1450% ( 201) 00:08:56.252 9211.888 - 9264.527: 76.4417% ( 161) 00:08:56.252 9264.527 - 9317.166: 77.9075% ( 182) 00:08:56.252 9317.166 - 9369.806: 79.8566% ( 242) 00:08:56.252 9369.806 - 9422.445: 81.2097% ( 168) 00:08:56.252 9422.445 - 9475.084: 82.7400% ( 190) 00:08:56.252 9475.084 - 9527.724: 84.3508% ( 200) 00:08:56.252 9527.724 - 9580.363: 85.6798% ( 165) 00:08:56.252 9580.363 - 9633.002: 87.0168% ( 166) 00:08:56.252 9633.002 - 9685.642: 88.3054% ( 160) 00:08:56.252 9685.642 - 9738.281: 89.2639% ( 119) 00:08:56.252 9738.281 - 9790.920: 90.1579% ( 111) 00:08:56.252 9790.920 - 9843.560: 90.8264% ( 83) 00:08:56.252 9843.560 - 9896.199: 91.2049% ( 47) 00:08:56.252 9896.199 - 9948.839: 91.5673% ( 45) 00:08:56.252 9948.839 - 10001.478: 91.9137% ( 43) 00:08:56.252 10001.478 - 10054.117: 92.1472% ( 29) 00:08:56.252 10054.117 - 10106.757: 92.3647% ( 27) 00:08:56.252 10106.757 - 10159.396: 92.6949% ( 41) 00:08:56.252 10159.396 - 10212.035: 92.8238% ( 16) 00:08:56.252 10212.035 - 10264.675: 92.9204% ( 12) 00:08:56.252 10264.675 - 10317.314: 93.0493% ( 16) 00:08:56.252 10317.314 - 10369.953: 93.2184% ( 21) 00:08:56.252 10369.953 - 10422.593: 93.5003% ( 35) 00:08:56.252 10422.593 - 10475.232: 93.5970% ( 12) 00:08:56.252 10475.232 - 10527.871: 93.6856% ( 11) 00:08:56.252 10527.871 - 10580.511: 93.7097% ( 3) 00:08:56.252 10580.511 - 10633.150: 93.7500% ( 5) 00:08:56.252 10633.150 - 10685.790: 93.7903% ( 5) 00:08:56.252 10685.790 - 10738.429: 93.7983% ( 1) 00:08:56.252 10738.429 - 10791.068: 93.8225% ( 3) 00:08:56.252 10791.068 - 10843.708: 93.8305% ( 1) 00:08:56.252 10843.708 - 10896.347: 93.8466% ( 2) 00:08:56.252 10896.347 - 10948.986: 93.8789% ( 4) 00:08:56.252 10948.986 - 11001.626: 93.8950% ( 2) 00:08:56.252 11001.626 - 11054.265: 93.9272% ( 4) 00:08:56.252 11054.265 - 11106.904: 93.9352% ( 1) 00:08:56.252 11106.904 - 11159.544: 93.9755% ( 5) 00:08:56.252 11159.544 - 11212.183: 94.0158% ( 5) 00:08:56.252 11212.183 - 11264.822: 94.1044% ( 11) 00:08:56.252 11264.822 - 11317.462: 94.1688% ( 8) 00:08:56.252 11317.462 - 11370.101: 94.1930% ( 3) 00:08:56.252 11370.101 - 11422.741: 94.2252% ( 4) 00:08:56.252 11422.741 - 11475.380: 94.2655% ( 5) 00:08:56.252 11475.380 - 11528.019: 94.2977% ( 4) 00:08:56.252 11528.019 - 11580.659: 94.4024% ( 13) 00:08:56.252 11580.659 - 11633.298: 94.5474% ( 18) 00:08:56.252 11633.298 - 11685.937: 94.6843% ( 17) 00:08:56.252 11685.937 - 11738.577: 94.7326% ( 6) 00:08:56.252 11738.577 - 11791.216: 94.7648% ( 4) 00:08:56.252 11791.216 - 11843.855: 94.7809% ( 2) 00:08:56.252 11843.855 - 11896.495: 94.7970% ( 2) 00:08:56.252 11896.495 - 11949.134: 94.8131% ( 2) 00:08:56.252 11949.134 - 12001.773: 94.8373% ( 3) 00:08:56.252 12001.773 - 12054.413: 94.8454% ( 1) 00:08:56.252 12475.528 - 12528.167: 94.8534% ( 1) 00:08:56.252 12580.806 - 12633.446: 94.8615% ( 1) 00:08:56.252 12633.446 - 12686.085: 94.8856% ( 3) 00:08:56.252 12686.085 - 12738.724: 94.9098% ( 3) 00:08:56.252 12738.724 - 12791.364: 94.9259% ( 2) 00:08:56.252 12791.364 - 12844.003: 94.9501% ( 3) 00:08:56.252 12844.003 - 12896.643: 94.9742% ( 3) 00:08:56.252 12896.643 - 12949.282: 95.0064% ( 4) 00:08:56.252 12949.282 - 13001.921: 95.0306% ( 3) 00:08:56.252 13001.921 - 13054.561: 95.0950% ( 8) 00:08:56.252 13054.561 - 13107.200: 95.1514% ( 7) 00:08:56.252 13107.200 - 13159.839: 95.1917% ( 5) 00:08:56.252 
13159.839 - 13212.479: 95.2400% ( 6) 00:08:56.252 13212.479 - 13265.118: 95.2883% ( 6) 00:08:56.252 13265.118 - 13317.757: 95.3367% ( 6) 00:08:56.252 13317.757 - 13370.397: 95.3689% ( 4) 00:08:56.252 13475.676 - 13580.954: 95.3769% ( 1) 00:08:56.252 13686.233 - 13791.512: 95.4172% ( 5) 00:08:56.252 13791.512 - 13896.790: 95.4736% ( 7) 00:08:56.252 13896.790 - 14002.069: 95.6186% ( 18) 00:08:56.252 14002.069 - 14107.348: 95.7474% ( 16) 00:08:56.252 14107.348 - 14212.627: 95.7716% ( 3) 00:08:56.252 14212.627 - 14317.905: 95.7957% ( 3) 00:08:56.252 14317.905 - 14423.184: 95.8199% ( 3) 00:08:56.252 14423.184 - 14528.463: 95.8441% ( 3) 00:08:56.252 14528.463 - 14633.741: 95.8682% ( 3) 00:08:56.252 14633.741 - 14739.020: 95.8763% ( 1) 00:08:56.252 16949.873 - 17055.152: 95.9327% ( 7) 00:08:56.252 17055.152 - 17160.431: 96.0374% ( 13) 00:08:56.252 17160.431 - 17265.709: 96.1340% ( 12) 00:08:56.252 17265.709 - 17370.988: 96.1823% ( 6) 00:08:56.252 17370.988 - 17476.267: 96.1985% ( 2) 00:08:56.252 17476.267 - 17581.545: 96.2226% ( 3) 00:08:56.252 17581.545 - 17686.824: 96.2387% ( 2) 00:08:56.252 17686.824 - 17792.103: 96.2629% ( 3) 00:08:56.252 17792.103 - 17897.382: 96.2790% ( 2) 00:08:56.252 17897.382 - 18002.660: 96.3032% ( 3) 00:08:56.252 18002.660 - 18107.939: 96.3273% ( 3) 00:08:56.252 18107.939 - 18213.218: 96.3434% ( 2) 00:08:56.252 18213.218 - 18318.496: 96.3676% ( 3) 00:08:56.252 18318.496 - 18423.775: 96.3837% ( 2) 00:08:56.252 18423.775 - 18529.054: 96.3918% ( 1) 00:08:56.252 24529.941 - 24635.219: 96.3998% ( 1) 00:08:56.252 24635.219 - 24740.498: 96.4159% ( 2) 00:08:56.252 24740.498 - 24845.777: 96.4562% ( 5) 00:08:56.252 24845.777 - 24951.055: 96.4884% ( 4) 00:08:56.252 24951.055 - 25056.334: 96.5126% ( 3) 00:08:56.253 25056.334 - 25161.613: 96.5287% ( 2) 00:08:56.253 25161.613 - 25266.892: 96.5448% ( 2) 00:08:56.253 25266.892 - 25372.170: 96.5689% ( 3) 00:08:56.253 25372.170 - 25477.449: 96.5931% ( 3) 00:08:56.253 25477.449 - 25582.728: 96.6092% ( 2) 00:08:56.253 25582.728 - 25688.006: 96.6173% ( 1) 00:08:56.253 25688.006 - 25793.285: 96.6253% ( 1) 00:08:56.253 25793.285 - 25898.564: 96.6334% ( 1) 00:08:56.253 25898.564 - 26003.843: 96.6414% ( 1) 00:08:56.253 26003.843 - 26109.121: 96.6495% ( 1) 00:08:56.253 26109.121 - 26214.400: 96.6575% ( 1) 00:08:56.253 26214.400 - 26319.679: 96.6656% ( 1) 00:08:56.253 26319.679 - 26424.957: 96.6736% ( 1) 00:08:56.253 26424.957 - 26530.236: 96.6817% ( 1) 00:08:56.253 26530.236 - 26635.515: 96.6978% ( 2) 00:08:56.253 26635.515 - 26740.794: 96.7059% ( 1) 00:08:56.253 26740.794 - 26846.072: 96.7139% ( 1) 00:08:56.253 26846.072 - 26951.351: 96.7220% ( 1) 00:08:56.253 26951.351 - 27161.908: 96.7381% ( 2) 00:08:56.253 27161.908 - 27372.466: 96.7461% ( 1) 00:08:56.253 27372.466 - 27583.023: 96.7703% ( 3) 00:08:56.253 27583.023 - 27793.581: 96.7784% ( 1) 00:08:56.253 27793.581 - 28004.138: 96.7945% ( 2) 00:08:56.253 28004.138 - 28214.696: 96.8106% ( 2) 00:08:56.253 28214.696 - 28425.253: 96.8267% ( 2) 00:08:56.253 28425.253 - 28635.810: 96.8428% ( 2) 00:08:56.253 28635.810 - 28846.368: 96.8589% ( 2) 00:08:56.253 28846.368 - 29056.925: 96.8750% ( 2) 00:08:56.253 29056.925 - 29267.483: 96.8911% ( 2) 00:08:56.253 29267.483 - 29478.040: 96.9072% ( 2) 00:08:56.253 44217.060 - 44427.618: 96.9153% ( 1) 00:08:56.253 44427.618 - 44638.175: 96.9233% ( 1) 00:08:56.253 44848.733 - 45059.290: 96.9314% ( 1) 00:08:56.253 45059.290 - 45269.847: 96.9475% ( 2) 00:08:56.253 45269.847 - 45480.405: 96.9636% ( 2) 00:08:56.253 45480.405 - 45690.962: 96.9878% ( 3) 00:08:56.253 
45690.962 - 45901.520: 97.0039% ( 2) 00:08:56.253 45901.520 - 46112.077: 97.0441% ( 5) 00:08:56.253 46112.077 - 46322.635: 97.0844% ( 5) 00:08:56.253 46322.635 - 46533.192: 97.1327% ( 6) 00:08:56.253 46533.192 - 46743.749: 97.1811% ( 6) 00:08:56.253 46743.749 - 46954.307: 97.2616% ( 10) 00:08:56.253 46954.307 - 47164.864: 97.3180% ( 7) 00:08:56.253 47164.864 - 47375.422: 97.3744% ( 7) 00:08:56.253 47375.422 - 47585.979: 97.4307% ( 7) 00:08:56.253 47585.979 - 47796.537: 97.4630% ( 4) 00:08:56.253 47796.537 - 48007.094: 97.5032% ( 5) 00:08:56.253 48007.094 - 48217.651: 97.5274% ( 3) 00:08:56.253 48217.651 - 48428.209: 97.5596% ( 4) 00:08:56.253 48428.209 - 48638.766: 97.5918% ( 4) 00:08:56.253 48638.766 - 48849.324: 97.6401% ( 6) 00:08:56.253 48849.324 - 49059.881: 97.6804% ( 5) 00:08:56.253 49059.881 - 49270.439: 97.7126% ( 4) 00:08:56.253 49270.439 - 49480.996: 97.7529% ( 5) 00:08:56.253 49480.996 - 49691.553: 97.7771% ( 3) 00:08:56.253 49691.553 - 49902.111: 97.8012% ( 3) 00:08:56.253 49902.111 - 50112.668: 97.8334% ( 4) 00:08:56.253 50112.668 - 50323.226: 97.8576% ( 3) 00:08:56.253 50323.226 - 50533.783: 97.8898% ( 4) 00:08:56.253 50533.783 - 50744.341: 97.9301% ( 5) 00:08:56.253 50744.341 - 50954.898: 97.9381% ( 1) 00:08:56.253 58113.851 - 58534.965: 97.9462% ( 1) 00:08:56.253 58534.965 - 58956.080: 97.9543% ( 1) 00:08:56.253 59377.195 - 59798.310: 97.9704% ( 2) 00:08:56.253 59798.310 - 60219.425: 98.0348% ( 8) 00:08:56.253 60219.425 - 60640.540: 98.1314% ( 12) 00:08:56.253 60640.540 - 61061.655: 98.2684% ( 17) 00:08:56.253 61061.655 - 61482.769: 98.3811% ( 14) 00:08:56.253 61482.769 - 61903.884: 98.5261% ( 18) 00:08:56.253 61903.884 - 62324.999: 98.7033% ( 22) 00:08:56.253 62324.999 - 62746.114: 98.8160% ( 14) 00:08:56.253 62746.114 - 63167.229: 98.9369% ( 15) 00:08:56.253 63167.229 - 63588.344: 99.1060% ( 21) 00:08:56.253 63588.344 - 64009.459: 99.1543% ( 6) 00:08:56.253 64009.459 - 64430.573: 99.2026% ( 6) 00:08:56.253 64430.573 - 64851.688: 99.2590% ( 7) 00:08:56.253 64851.688 - 65272.803: 99.3154% ( 7) 00:08:56.253 65272.803 - 65693.918: 99.3637% ( 6) 00:08:56.253 65693.918 - 66115.033: 99.4040% ( 5) 00:08:56.253 66115.033 - 66536.148: 99.4443% ( 5) 00:08:56.253 66536.148 - 66957.263: 99.4765% ( 4) 00:08:56.253 66957.263 - 67378.378: 99.4845% ( 1) 00:08:56.253 75800.675 - 76221.790: 99.5006% ( 2) 00:08:56.253 76221.790 - 76642.904: 99.5168% ( 2) 00:08:56.253 76642.904 - 77064.019: 99.5329% ( 2) 00:08:56.253 77064.019 - 77485.134: 99.5490% ( 2) 00:08:56.253 77485.134 - 77906.249: 99.5892% ( 5) 00:08:56.253 77906.249 - 78327.364: 99.6376% ( 6) 00:08:56.253 78327.364 - 78748.479: 99.6778% ( 5) 00:08:56.253 78748.479 - 79169.594: 99.7503% ( 9) 00:08:56.253 79169.594 - 79590.708: 99.8228% ( 9) 00:08:56.253 79590.708 - 80011.823: 99.8550% ( 4) 00:08:56.253 80011.823 - 80432.938: 99.8792% ( 3) 00:08:56.253 80432.938 - 80854.053: 99.9195% ( 5) 00:08:56.253 80854.053 - 81275.168: 99.9356% ( 2) 00:08:56.253 81275.168 - 81696.283: 99.9919% ( 7) 00:08:56.253 81696.283 - 82117.398: 100.0000% ( 1) 00:08:56.253 00:08:56.253 10:45:45 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:56.253 00:08:56.253 real 0m2.695s 00:08:56.253 user 0m2.284s 00:08:56.253 sys 0m0.298s 00:08:56.253 ************************************ 00:08:56.253 END TEST nvme_perf 00:08:56.253 ************************************ 00:08:56.253 10:45:45 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.253 10:45:45 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:56.253 
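Each entry in the latency histograms printed by nvme_perf above has the form "lo - hi: cum% ( n)": a latency bucket in microseconds ("Range in us"), the cumulative percentage of IOs completed at or below that bucket, and the per-bucket IO count in parentheses. One way to pull an approximate tail latency out of output like this is a small awk filter. The sketch below is illustrative only and is not part of the SPDK test suite: the script name is made up, it assumes exactly the "lo - hi: cum% ( n)" layout shown above, and it should be fed one histogram at a time, since it stops at the first bucket that crosses the target percentile.

#!/usr/bin/env bash
# approx_p99.sh - illustrative sketch, not an SPDK tool.
# Prints the first latency bucket whose cumulative percentage reaches
# the requested percentile (default 99) in "lo - hi: cum% ( n)" output.
pct="${1:-99}"
awk -v target="$pct" '
{
    # A physical line of captured console output may carry several
    # histogram entries; peel them off left to right.
    while (match($0, /[0-9.]+ - [0-9.]+: *[0-9.]+% *\( *[0-9]+\)/)) {
        entry = substr($0, RSTART, RLENGTH)
        $0 = substr($0, RSTART + RLENGTH)
        split(entry, f, /[ :%()-]+/)  # f[1]=lo f[2]=hi f[3]=cum% f[4]=count
        if (f[3] + 0 >= target + 0) {
            printf "p%s <= %s us (bucket %s - %s, cumulative %s%%)\n",
                   target, f[2], f[1], f[2], f[3]
            exit
        }
    }
}
' "${2:-/dev/stdin}"

For instance, fed only the NSID 3 histogram above, a target of 97 would report the 45690.962 - 45901.520 us bucket (97.0039% cumulative), since the preceding bucket tops out at 96.9878%.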
10:45:45 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:56.253 10:45:45 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:56.253 10:45:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.253 10:45:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:56.253 ************************************ 00:08:56.253 START TEST nvme_hello_world 00:08:56.253 ************************************ 00:08:56.253 10:45:45 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:56.547 Initializing NVMe Controllers 00:08:56.547 Attached to 0000:00:10.0 00:08:56.547 Namespace ID: 1 size: 6GB 00:08:56.547 Attached to 0000:00:11.0 00:08:56.547 Namespace ID: 1 size: 5GB 00:08:56.547 Attached to 0000:00:13.0 00:08:56.547 Namespace ID: 1 size: 1GB 00:08:56.547 Attached to 0000:00:12.0 00:08:56.547 Namespace ID: 1 size: 4GB 00:08:56.547 Namespace ID: 2 size: 4GB 00:08:56.547 Namespace ID: 3 size: 4GB 00:08:56.547 Initialization complete. 00:08:56.547 INFO: using host memory buffer for IO 00:08:56.547 Hello world! 00:08:56.547 INFO: using host memory buffer for IO 00:08:56.547 Hello world! 00:08:56.547 INFO: using host memory buffer for IO 00:08:56.547 Hello world! 00:08:56.547 INFO: using host memory buffer for IO 00:08:56.547 Hello world! 00:08:56.547 INFO: using host memory buffer for IO 00:08:56.547 Hello world! 00:08:56.547 INFO: using host memory buffer for IO 00:08:56.547 Hello world! 00:08:56.547 ************************************ 00:08:56.547 END TEST nvme_hello_world 00:08:56.547 ************************************ 00:08:56.547 00:08:56.547 real 0m0.321s 00:08:56.547 user 0m0.116s 00:08:56.547 sys 0m0.152s 00:08:56.547 10:45:45 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.547 10:45:45 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:56.547 10:45:45 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:56.547 10:45:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.547 10:45:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.547 10:45:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:56.547 ************************************ 00:08:56.547 START TEST nvme_sgl 00:08:56.547 ************************************ 00:08:56.547 10:45:45 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:56.848 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:56.848 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:56.848 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:56.848 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:56.848 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:56.848 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:56.848 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:56.848 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:56.848 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:56.848 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:56.848 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:56.848 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:56.848 0000:00:13.0: build_io_request_0 Invalid IO length 
parameter 00:08:56.848 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:56.848 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:56.848 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:56.848 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:56.848 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:56.848 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:56.848 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:56.848 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:56.848 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:56.848 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:56.848 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:56.848 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:56.848 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:56.848 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:56.848 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:56.848 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:56.848 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:56.848 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:56.848 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:56.848 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:56.848 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:56.848 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:56.848 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:56.848 NVMe Readv/Writev Request test 00:08:56.848 Attached to 0000:00:10.0 00:08:56.848 Attached to 0000:00:11.0 00:08:56.848 Attached to 0000:00:13.0 00:08:56.848 Attached to 0000:00:12.0 00:08:56.848 0000:00:10.0: build_io_request_2 test passed 00:08:56.848 0000:00:10.0: build_io_request_4 test passed 00:08:56.848 0000:00:10.0: build_io_request_5 test passed 00:08:56.848 0000:00:10.0: build_io_request_6 test passed 00:08:56.848 0000:00:10.0: build_io_request_7 test passed 00:08:56.848 0000:00:10.0: build_io_request_10 test passed 00:08:56.848 0000:00:11.0: build_io_request_2 test passed 00:08:56.848 0000:00:11.0: build_io_request_4 test passed 00:08:56.848 0000:00:11.0: build_io_request_5 test passed 00:08:56.848 0000:00:11.0: build_io_request_6 test passed 00:08:56.848 0000:00:11.0: build_io_request_7 test passed 00:08:56.848 0000:00:11.0: build_io_request_10 test passed 00:08:56.848 Cleaning up... 
00:08:56.848 00:08:56.848 real 0m0.369s 00:08:56.848 user 0m0.165s 00:08:56.848 sys 0m0.149s 00:08:56.848 10:45:46 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.848 10:45:46 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:56.848 ************************************ 00:08:56.848 END TEST nvme_sgl 00:08:56.848 ************************************ 00:08:57.107 10:45:46 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:57.107 10:45:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.107 10:45:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.107 10:45:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.107 ************************************ 00:08:57.107 START TEST nvme_e2edp 00:08:57.107 ************************************ 00:08:57.107 10:45:46 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:57.366 NVMe Write/Read with End-to-End data protection test 00:08:57.366 Attached to 0000:00:10.0 00:08:57.366 Attached to 0000:00:11.0 00:08:57.366 Attached to 0000:00:13.0 00:08:57.366 Attached to 0000:00:12.0 00:08:57.366 Cleaning up... 00:08:57.366 ************************************ 00:08:57.366 END TEST nvme_e2edp 00:08:57.366 ************************************ 00:08:57.366 00:08:57.366 real 0m0.283s 00:08:57.366 user 0m0.107s 00:08:57.366 sys 0m0.132s 00:08:57.366 10:45:46 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.366 10:45:46 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:57.366 10:45:46 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:57.366 10:45:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.366 10:45:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.366 10:45:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.366 ************************************ 00:08:57.366 START TEST nvme_reserve 00:08:57.366 ************************************ 00:08:57.366 10:45:46 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:57.627 ===================================================== 00:08:57.627 NVMe Controller at PCI bus 0, device 16, function 0 00:08:57.627 ===================================================== 00:08:57.627 Reservations: Not Supported 00:08:57.627 ===================================================== 00:08:57.627 NVMe Controller at PCI bus 0, device 17, function 0 00:08:57.627 ===================================================== 00:08:57.627 Reservations: Not Supported 00:08:57.627 ===================================================== 00:08:57.627 NVMe Controller at PCI bus 0, device 19, function 0 00:08:57.627 ===================================================== 00:08:57.627 Reservations: Not Supported 00:08:57.627 ===================================================== 00:08:57.627 NVMe Controller at PCI bus 0, device 18, function 0 00:08:57.627 ===================================================== 00:08:57.627 Reservations: Not Supported 00:08:57.627 Reservation test passed 00:08:57.627 00:08:57.627 real 0m0.264s 00:08:57.627 user 0m0.098s 00:08:57.627 sys 0m0.125s 00:08:57.627 ************************************ 00:08:57.627 END TEST nvme_reserve 00:08:57.627 ************************************ 00:08:57.627 10:45:46 
nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.627 10:45:46 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:57.627 10:45:46 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:57.627 10:45:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.627 10:45:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.627 10:45:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.627 ************************************ 00:08:57.627 START TEST nvme_err_injection 00:08:57.627 ************************************ 00:08:57.627 10:45:46 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:58.195 NVMe Error Injection test 00:08:58.195 Attached to 0000:00:10.0 00:08:58.195 Attached to 0000:00:11.0 00:08:58.195 Attached to 0000:00:13.0 00:08:58.195 Attached to 0000:00:12.0 00:08:58.195 0000:00:10.0: get features failed as expected 00:08:58.195 0000:00:11.0: get features failed as expected 00:08:58.195 0000:00:13.0: get features failed as expected 00:08:58.195 0000:00:12.0: get features failed as expected 00:08:58.195 0000:00:10.0: get features successfully as expected 00:08:58.195 0000:00:11.0: get features successfully as expected 00:08:58.195 0000:00:13.0: get features successfully as expected 00:08:58.195 0000:00:12.0: get features successfully as expected 00:08:58.195 0000:00:10.0: read failed as expected 00:08:58.195 0000:00:11.0: read failed as expected 00:08:58.195 0000:00:13.0: read failed as expected 00:08:58.195 0000:00:12.0: read failed as expected 00:08:58.195 0000:00:10.0: read successfully as expected 00:08:58.195 0000:00:11.0: read successfully as expected 00:08:58.195 0000:00:13.0: read successfully as expected 00:08:58.195 0000:00:12.0: read successfully as expected 00:08:58.195 Cleaning up... 00:08:58.195 00:08:58.195 real 0m0.315s 00:08:58.195 user 0m0.110s 00:08:58.195 sys 0m0.156s 00:08:58.195 ************************************ 00:08:58.195 END TEST nvme_err_injection 00:08:58.195 ************************************ 00:08:58.195 10:45:47 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.195 10:45:47 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:58.195 10:45:47 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:58.195 10:45:47 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:58.195 10:45:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.195 10:45:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:58.195 ************************************ 00:08:58.195 START TEST nvme_overhead 00:08:58.195 ************************************ 00:08:58.195 10:45:47 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:59.573 Initializing NVMe Controllers 00:08:59.573 Attached to 0000:00:10.0 00:08:59.573 Attached to 0000:00:11.0 00:08:59.573 Attached to 0000:00:13.0 00:08:59.573 Attached to 0000:00:12.0 00:08:59.573 Initialization complete. Launching workers. 
00:08:59.573 submit (in ns) avg, min, max = 13466.2, 10744.6, 100764.7 00:08:59.573 complete (in ns) avg, min, max = 8405.6, 7793.6, 511484.3 00:08:59.573 00:08:59.573 Submit histogram 00:08:59.573 ================ 00:08:59.573 Range in us Cumulative Count 00:08:59.573 10.744 - 10.795: 0.0291% ( 2) 00:08:59.573 11.052 - 11.104: 0.0437% ( 1) 00:08:59.573 11.206 - 11.258: 0.0582% ( 1) 00:08:59.573 11.309 - 11.361: 0.0728% ( 1) 00:08:59.573 11.823 - 11.875: 0.0874% ( 1) 00:08:59.573 12.132 - 12.183: 0.1019% ( 1) 00:08:59.573 12.183 - 12.235: 0.1165% ( 1) 00:08:59.573 12.235 - 12.286: 0.1747% ( 4) 00:08:59.573 12.286 - 12.337: 0.3932% ( 15) 00:08:59.573 12.337 - 12.389: 1.0339% ( 44) 00:08:59.573 12.389 - 12.440: 1.6310% ( 41) 00:08:59.573 12.440 - 12.492: 2.2717% ( 44) 00:08:59.573 12.492 - 12.543: 2.9562% ( 47) 00:08:59.573 12.543 - 12.594: 3.8153% ( 59) 00:08:59.573 12.594 - 12.646: 4.5580% ( 51) 00:08:59.573 12.646 - 12.697: 5.4900% ( 64) 00:08:59.573 12.697 - 12.749: 6.8443% ( 93) 00:08:59.573 12.749 - 12.800: 9.5092% ( 183) 00:08:59.573 12.800 - 12.851: 13.5576% ( 278) 00:08:59.573 12.851 - 12.903: 18.5816% ( 345) 00:08:59.573 12.903 - 12.954: 25.2075% ( 455) 00:08:59.573 12.954 - 13.006: 32.7072% ( 515) 00:08:59.573 13.006 - 13.057: 40.2213% ( 516) 00:08:59.573 13.057 - 13.108: 47.7355% ( 516) 00:08:59.573 13.108 - 13.160: 54.3760% ( 456) 00:08:59.573 13.160 - 13.263: 66.6667% ( 844) 00:08:59.573 13.263 - 13.365: 76.9914% ( 709) 00:08:59.573 13.365 - 13.468: 83.6755% ( 459) 00:08:59.573 13.468 - 13.571: 88.2336% ( 313) 00:08:59.573 13.571 - 13.674: 90.8257% ( 178) 00:08:59.573 13.674 - 13.777: 92.5295% ( 117) 00:08:59.573 13.777 - 13.880: 93.4615% ( 64) 00:08:59.573 13.880 - 13.982: 94.1168% ( 45) 00:08:59.573 13.982 - 14.085: 94.3935% ( 19) 00:08:59.573 14.085 - 14.188: 94.6119% ( 15) 00:08:59.573 14.188 - 14.291: 94.6993% ( 6) 00:08:59.573 14.291 - 14.394: 94.7430% ( 3) 00:08:59.573 14.394 - 14.496: 94.8158% ( 5) 00:08:59.573 14.496 - 14.599: 94.8449% ( 2) 00:08:59.573 14.702 - 14.805: 94.8740% ( 2) 00:08:59.573 15.216 - 15.319: 94.8886% ( 1) 00:08:59.573 15.319 - 15.422: 94.9177% ( 2) 00:08:59.573 15.524 - 15.627: 94.9323% ( 1) 00:08:59.573 15.833 - 15.936: 94.9468% ( 1) 00:08:59.573 15.936 - 16.039: 94.9760% ( 2) 00:08:59.573 16.039 - 16.141: 95.0197% ( 3) 00:08:59.573 16.141 - 16.244: 95.0633% ( 3) 00:08:59.573 16.244 - 16.347: 95.1216% ( 4) 00:08:59.573 16.347 - 16.450: 95.2090% ( 6) 00:08:59.573 16.450 - 16.553: 95.3837% ( 12) 00:08:59.573 16.553 - 16.655: 95.5002% ( 8) 00:08:59.573 16.655 - 16.758: 95.6604% ( 11) 00:08:59.573 16.758 - 16.861: 95.9517% ( 20) 00:08:59.573 16.861 - 16.964: 96.1264% ( 12) 00:08:59.573 16.964 - 17.067: 96.3740% ( 17) 00:08:59.573 17.067 - 17.169: 96.4905% ( 8) 00:08:59.573 17.169 - 17.272: 96.6070% ( 8) 00:08:59.573 17.272 - 17.375: 96.6943% ( 6) 00:08:59.573 17.375 - 17.478: 96.8108% ( 8) 00:08:59.573 17.478 - 17.581: 96.9128% ( 7) 00:08:59.573 17.581 - 17.684: 96.9273% ( 1) 00:08:59.573 17.684 - 17.786: 96.9419% ( 1) 00:08:59.573 17.786 - 17.889: 97.0147% ( 5) 00:08:59.573 17.889 - 17.992: 97.0875% ( 5) 00:08:59.573 17.992 - 18.095: 97.1458% ( 4) 00:08:59.573 18.095 - 18.198: 97.2186% ( 5) 00:08:59.573 18.198 - 18.300: 97.3205% ( 7) 00:08:59.573 18.300 - 18.403: 97.4370% ( 8) 00:08:59.573 18.403 - 18.506: 97.5972% ( 11) 00:08:59.573 18.506 - 18.609: 97.7137% ( 8) 00:08:59.573 18.609 - 18.712: 97.7865% ( 5) 00:08:59.573 18.712 - 18.814: 97.8593% ( 5) 00:08:59.573 18.814 - 18.917: 97.9758% ( 8) 00:08:59.573 18.917 - 19.020: 98.0195% ( 3) 00:08:59.573 
19.020 - 19.123: 98.1215% ( 7) 00:08:59.573 19.123 - 19.226: 98.2234% ( 7) 00:08:59.574 19.226 - 19.329: 98.3690% ( 10) 00:08:59.574 19.329 - 19.431: 98.5292% ( 11) 00:08:59.574 19.431 - 19.534: 98.6166% ( 6) 00:08:59.574 19.534 - 19.637: 98.7039% ( 6) 00:08:59.574 19.637 - 19.740: 98.7622% ( 4) 00:08:59.574 19.740 - 19.843: 98.8641% ( 7) 00:08:59.574 19.843 - 19.945: 98.8933% ( 2) 00:08:59.574 19.945 - 20.048: 98.9661% ( 5) 00:08:59.574 20.048 - 20.151: 99.0243% ( 4) 00:08:59.574 20.151 - 20.254: 99.1117% ( 6) 00:08:59.574 20.357 - 20.459: 99.1408% ( 2) 00:08:59.574 20.459 - 20.562: 99.1554% ( 1) 00:08:59.574 20.665 - 20.768: 99.1845% ( 2) 00:08:59.574 20.768 - 20.871: 99.1991% ( 1) 00:08:59.574 20.871 - 20.973: 99.2136% ( 1) 00:08:59.574 20.973 - 21.076: 99.2428% ( 2) 00:08:59.574 21.076 - 21.179: 99.2573% ( 1) 00:08:59.574 21.282 - 21.385: 99.2719% ( 1) 00:08:59.574 21.590 - 21.693: 99.2864% ( 1) 00:08:59.574 22.002 - 22.104: 99.3156% ( 2) 00:08:59.574 22.207 - 22.310: 99.3301% ( 1) 00:08:59.574 22.310 - 22.413: 99.3447% ( 1) 00:08:59.574 22.721 - 22.824: 99.3738% ( 2) 00:08:59.574 23.030 - 23.133: 99.4029% ( 2) 00:08:59.574 23.133 - 23.235: 99.4175% ( 1) 00:08:59.574 23.235 - 23.338: 99.4466% ( 2) 00:08:59.574 23.441 - 23.544: 99.4758% ( 2) 00:08:59.574 23.955 - 24.058: 99.4903% ( 1) 00:08:59.574 24.058 - 24.161: 99.5194% ( 2) 00:08:59.574 24.263 - 24.366: 99.5631% ( 3) 00:08:59.574 24.366 - 24.469: 99.5777% ( 1) 00:08:59.574 24.675 - 24.778: 99.5923% ( 1) 00:08:59.574 24.983 - 25.086: 99.6068% ( 1) 00:08:59.574 25.394 - 25.497: 99.6505% ( 3) 00:08:59.574 25.600 - 25.703: 99.6651% ( 1) 00:08:59.574 25.703 - 25.806: 99.6796% ( 1) 00:08:59.574 25.806 - 25.908: 99.7088% ( 2) 00:08:59.574 26.217 - 26.320: 99.7233% ( 1) 00:08:59.574 26.320 - 26.525: 99.7379% ( 1) 00:08:59.574 27.553 - 27.759: 99.7670% ( 2) 00:08:59.574 27.965 - 28.170: 99.7961% ( 2) 00:08:59.574 28.170 - 28.376: 99.8107% ( 1) 00:08:59.574 30.227 - 30.432: 99.8253% ( 1) 00:08:59.574 30.638 - 30.843: 99.8398% ( 1) 00:08:59.574 31.666 - 31.871: 99.8544% ( 1) 00:08:59.574 32.077 - 32.283: 99.8689% ( 1) 00:08:59.574 35.161 - 35.367: 99.8835% ( 1) 00:08:59.574 35.984 - 36.190: 99.8981% ( 1) 00:08:59.574 36.395 - 36.601: 99.9126% ( 1) 00:08:59.574 37.218 - 37.423: 99.9272% ( 1) 00:08:59.574 38.657 - 38.863: 99.9418% ( 1) 00:08:59.574 44.003 - 44.209: 99.9563% ( 1) 00:08:59.574 47.910 - 48.116: 99.9709% ( 1) 00:08:59.574 91.708 - 92.119: 99.9854% ( 1) 00:08:59.574 100.755 - 101.166: 100.0000% ( 1) 00:08:59.574 00:08:59.574 Complete histogram 00:08:59.574 ================== 00:08:59.574 Range in us Cumulative Count 00:08:59.574 7.762 - 7.814: 0.1019% ( 7) 00:08:59.574 7.814 - 7.865: 1.2669% ( 80) 00:08:59.574 7.865 - 7.916: 7.7909% ( 448) 00:08:59.574 7.916 - 7.968: 24.2901% ( 1133) 00:08:59.574 7.968 - 8.019: 46.9055% ( 1553) 00:08:59.574 8.019 - 8.071: 64.2784% ( 1193) 00:08:59.574 8.071 - 8.122: 73.5256% ( 635) 00:08:59.574 8.122 - 8.173: 78.4476% ( 338) 00:08:59.574 8.173 - 8.225: 82.0882% ( 250) 00:08:59.574 8.225 - 8.276: 84.1998% ( 145) 00:08:59.574 8.276 - 8.328: 85.3502% ( 79) 00:08:59.574 8.328 - 8.379: 86.2677% ( 63) 00:08:59.574 8.379 - 8.431: 87.1560% ( 61) 00:08:59.574 8.431 - 8.482: 87.5783% ( 29) 00:08:59.574 8.482 - 8.533: 87.8404% ( 18) 00:08:59.574 8.533 - 8.585: 88.3792% ( 37) 00:08:59.574 8.585 - 8.636: 89.0782% ( 48) 00:08:59.574 8.636 - 8.688: 89.5588% ( 33) 00:08:59.574 8.688 - 8.739: 89.8791% ( 22) 00:08:59.574 8.739 - 8.790: 91.0150% ( 78) 00:08:59.574 8.790 - 8.842: 92.0635% ( 72) 00:08:59.574 8.842 - 
8.893: 93.0683% ( 69) 00:08:59.574 8.893 - 8.945: 93.9566% ( 61) 00:08:59.574 8.945 - 8.996: 94.9323% ( 67) 00:08:59.574 8.996 - 9.047: 95.6604% ( 50) 00:08:59.574 9.047 - 9.099: 96.3012% ( 44) 00:08:59.574 9.099 - 9.150: 96.7526% ( 31) 00:08:59.574 9.150 - 9.202: 97.0147% ( 18) 00:08:59.574 9.202 - 9.253: 97.2914% ( 19) 00:08:59.574 9.253 - 9.304: 97.4953% ( 14) 00:08:59.574 9.304 - 9.356: 97.6263% ( 9) 00:08:59.574 9.356 - 9.407: 97.7283% ( 7) 00:08:59.574 9.407 - 9.459: 97.8156% ( 6) 00:08:59.574 9.459 - 9.510: 97.9321% ( 8) 00:08:59.574 9.510 - 9.561: 97.9467% ( 1) 00:08:59.574 9.561 - 9.613: 97.9758% ( 2) 00:08:59.574 9.664 - 9.716: 97.9904% ( 1) 00:08:59.574 9.870 - 9.921: 98.0050% ( 1) 00:08:59.574 10.230 - 10.281: 98.0195% ( 1) 00:08:59.574 10.281 - 10.333: 98.0341% ( 1) 00:08:59.574 10.384 - 10.435: 98.0486% ( 1) 00:08:59.574 10.949 - 11.001: 98.0632% ( 1) 00:08:59.574 11.052 - 11.104: 98.0778% ( 1) 00:08:59.574 11.361 - 11.412: 98.0923% ( 1) 00:08:59.574 11.412 - 11.463: 98.1069% ( 1) 00:08:59.574 11.669 - 11.720: 98.1215% ( 1) 00:08:59.574 11.926 - 11.978: 98.1360% ( 1) 00:08:59.574 12.029 - 12.080: 98.1506% ( 1) 00:08:59.574 13.108 - 13.160: 98.1651% ( 1) 00:08:59.574 13.160 - 13.263: 98.1797% ( 1) 00:08:59.574 13.263 - 13.365: 98.2816% ( 7) 00:08:59.574 13.365 - 13.468: 98.4127% ( 9) 00:08:59.574 13.468 - 13.571: 98.5001% ( 6) 00:08:59.574 13.571 - 13.674: 98.5874% ( 6) 00:08:59.574 13.674 - 13.777: 98.7039% ( 8) 00:08:59.574 13.777 - 13.880: 98.7913% ( 6) 00:08:59.574 13.880 - 13.982: 98.9078% ( 8) 00:08:59.574 13.982 - 14.085: 98.9952% ( 6) 00:08:59.574 14.085 - 14.188: 99.0389% ( 3) 00:08:59.574 14.188 - 14.291: 99.0680% ( 2) 00:08:59.574 14.291 - 14.394: 99.0826% ( 1) 00:08:59.574 14.394 - 14.496: 99.1263% ( 3) 00:08:59.574 14.496 - 14.599: 99.1845% ( 4) 00:08:59.574 14.599 - 14.702: 99.2282% ( 3) 00:08:59.574 14.702 - 14.805: 99.3010% ( 5) 00:08:59.574 14.908 - 15.010: 99.3447% ( 3) 00:08:59.574 15.113 - 15.216: 99.3593% ( 1) 00:08:59.574 15.216 - 15.319: 99.3884% ( 2) 00:08:59.574 15.422 - 15.524: 99.4175% ( 2) 00:08:59.574 15.627 - 15.730: 99.4321% ( 1) 00:08:59.574 15.833 - 15.936: 99.4466% ( 1) 00:08:59.574 15.936 - 16.039: 99.4612% ( 1) 00:08:59.574 16.039 - 16.141: 99.4758% ( 1) 00:08:59.574 16.141 - 16.244: 99.4903% ( 1) 00:08:59.574 17.375 - 17.478: 99.5049% ( 1) 00:08:59.574 18.198 - 18.300: 99.5194% ( 1) 00:08:59.574 19.534 - 19.637: 99.5340% ( 1) 00:08:59.574 20.048 - 20.151: 99.5631% ( 2) 00:08:59.574 20.254 - 20.357: 99.5923% ( 2) 00:08:59.574 20.357 - 20.459: 99.6359% ( 3) 00:08:59.574 20.459 - 20.562: 99.6651% ( 2) 00:08:59.574 20.768 - 20.871: 99.6796% ( 1) 00:08:59.574 20.973 - 21.076: 99.6942% ( 1) 00:08:59.574 22.002 - 22.104: 99.7088% ( 1) 00:08:59.574 22.104 - 22.207: 99.7379% ( 2) 00:08:59.574 23.338 - 23.441: 99.7524% ( 1) 00:08:59.574 24.469 - 24.572: 99.7670% ( 1) 00:08:59.574 25.189 - 25.292: 99.7816% ( 1) 00:08:59.574 26.937 - 27.142: 99.7961% ( 1) 00:08:59.574 28.582 - 28.787: 99.8107% ( 1) 00:08:59.574 28.993 - 29.198: 99.8253% ( 1) 00:08:59.574 31.871 - 32.077: 99.8398% ( 1) 00:08:59.574 34.133 - 34.339: 99.8544% ( 1) 00:08:59.574 35.161 - 35.367: 99.8689% ( 1) 00:08:59.574 35.984 - 36.190: 99.8981% ( 2) 00:08:59.574 50.583 - 50.789: 99.9126% ( 1) 00:08:59.574 52.639 - 53.051: 99.9272% ( 1) 00:08:59.574 61.687 - 62.098: 99.9418% ( 1) 00:08:59.574 65.799 - 66.210: 99.9563% ( 1) 00:08:59.574 106.101 - 106.924: 99.9709% ( 1) 00:08:59.574 111.859 - 112.681: 99.9854% ( 1) 00:08:59.574 509.944 - 513.234: 100.0000% ( 1) 00:08:59.574 00:08:59.574 
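The nvme_overhead run above reports, for each of the submit and completion paths, an avg/min/max in nanoseconds followed by a per-bucket histogram in microseconds. One rough single-number summary is the sum of the two averages, i.e. the software cost the tool attributes to each IO. The awk sketch below computes that sum; it is illustrative only (the script name is hypothetical, it is not part of the SPDK tooling, and it assumes the exact "submit (in ns) avg, min, max = ..." wording printed above).

#!/usr/bin/env bash
# overhead_summary.sh - illustrative sketch, not an SPDK tool.
# Pulls the average submit/complete path times (in ns) out of
# nvme_overhead console output and adds them up as a rough estimate
# of the per-IO software cost.
awk '
{
    # match() tolerates captures where several records share one line
    if (match($0, /submit \(in ns\) avg, min, max = *[0-9.]+/)) {
        n = split(substr($0, RSTART, RLENGTH), a, " "); s = a[n]
    }
    if (match($0, /complete \(in ns\) avg, min, max = *[0-9.]+/)) {
        n = split(substr($0, RSTART, RLENGTH), a, " "); c = a[n]
    }
}
END {
    if (s != "" && c != "")
        printf "avg submit %.1f ns + avg complete %.1f ns ~ %.1f ns per IO\n",
               s, c, s + c
    else
        print "overhead summary lines not found" > "/dev/stderr"
}
' "${1:-/dev/stdin}"

Against the figures above it would print: avg submit 13466.2 ns + avg complete 8405.6 ns ~ 21871.8 ns per IO.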
00:08:59.574 real 0m1.277s 00:08:59.574 user 0m1.091s 00:08:59.574 sys 0m0.138s 00:08:59.574 ************************************ 00:08:59.574 END TEST nvme_overhead 00:08:59.574 ************************************ 00:08:59.574 10:45:48 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.574 10:45:48 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:59.574 10:45:48 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:59.574 10:45:48 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:59.574 10:45:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.574 10:45:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.574 ************************************ 00:08:59.574 START TEST nvme_arbitration 00:08:59.574 ************************************ 00:08:59.574 10:45:48 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:02.871 Initializing NVMe Controllers 00:09:02.871 Attached to 0000:00:10.0 00:09:02.871 Attached to 0000:00:11.0 00:09:02.871 Attached to 0000:00:13.0 00:09:02.871 Attached to 0000:00:12.0 00:09:02.871 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:02.871 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:02.871 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:02.871 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:02.871 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:02.871 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:02.871 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:02.871 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:02.871 Initialization complete. Launching workers. 
00:09:02.871 Starting thread on core 1 with urgent priority queue 00:09:02.871 Starting thread on core 2 with urgent priority queue 00:09:02.871 Starting thread on core 3 with urgent priority queue 00:09:02.871 Starting thread on core 0 with urgent priority queue 00:09:02.871 QEMU NVMe Ctrl (12340 ) core 0: 405.33 IO/s 246.71 secs/100000 ios 00:09:02.871 QEMU NVMe Ctrl (12342 ) core 0: 405.33 IO/s 246.71 secs/100000 ios 00:09:02.871 QEMU NVMe Ctrl (12341 ) core 1: 426.67 IO/s 234.38 secs/100000 ios 00:09:02.871 QEMU NVMe Ctrl (12342 ) core 1: 426.67 IO/s 234.38 secs/100000 ios 00:09:02.871 QEMU NVMe Ctrl (12343 ) core 2: 789.33 IO/s 126.69 secs/100000 ios 00:09:02.871 QEMU NVMe Ctrl (12342 ) core 3: 469.33 IO/s 213.07 secs/100000 ios 00:09:02.871 ======================================================== 00:09:02.871 00:09:02.871 00:09:02.871 real 0m3.443s 00:09:02.871 user 0m9.420s 00:09:02.871 sys 0m0.148s 00:09:02.871 10:45:52 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.871 ************************************ 00:09:02.871 END TEST nvme_arbitration 00:09:02.871 ************************************ 00:09:02.871 10:45:52 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:02.871 10:45:52 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:02.871 10:45:52 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:02.871 10:45:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.871 10:45:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:02.871 ************************************ 00:09:02.871 START TEST nvme_single_aen 00:09:02.871 ************************************ 00:09:02.871 10:45:52 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:03.130 Asynchronous Event Request test 00:09:03.130 Attached to 0000:00:10.0 00:09:03.130 Attached to 0000:00:11.0 00:09:03.130 Attached to 0000:00:13.0 00:09:03.130 Attached to 0000:00:12.0 00:09:03.130 Reset controller to setup AER completions for this process 00:09:03.130 Registering asynchronous event callbacks... 
00:09:03.130 Getting orig temperature thresholds of all controllers 00:09:03.130 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:03.130 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:03.130 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:03.130 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:03.130 Setting all controllers temperature threshold low to trigger AER 00:09:03.130 Waiting for all controllers temperature threshold to be set lower 00:09:03.130 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:03.130 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:03.130 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:03.130 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:03.130 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:03.130 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:03.130 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:03.130 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:03.130 Waiting for all controllers to trigger AER and reset threshold 00:09:03.130 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:03.130 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:03.130 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:03.130 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:03.130 Cleaning up... 00:09:03.130 00:09:03.130 real 0m0.285s 00:09:03.130 user 0m0.103s 00:09:03.130 sys 0m0.139s 00:09:03.131 10:45:52 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.131 ************************************ 00:09:03.131 END TEST nvme_single_aen 00:09:03.131 ************************************ 00:09:03.131 10:45:52 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:03.391 10:45:52 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:03.391 10:45:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.391 10:45:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.391 10:45:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:03.391 ************************************ 00:09:03.391 START TEST nvme_doorbell_aers 00:09:03.391 ************************************ 00:09:03.391 10:45:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:09:03.391 10:45:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:03.391 10:45:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:03.391 10:45:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:03.391 10:45:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:03.391 10:45:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:03.391 10:45:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:09:03.391 10:45:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:03.391 10:45:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:03.391 10:45:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
00:09:03.391 10:45:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:03.391 10:45:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:03.391 10:45:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:03.391 10:45:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:03.650 [2024-11-20 10:45:52.854626] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64349) is not found. Dropping the request. 00:09:13.627 Executing: test_write_invalid_db 00:09:13.627 Waiting for AER completion... 00:09:13.627 Failure: test_write_invalid_db 00:09:13.627 00:09:13.627 Executing: test_invalid_db_write_overflow_sq 00:09:13.627 Waiting for AER completion... 00:09:13.627 Failure: test_invalid_db_write_overflow_sq 00:09:13.627 00:09:13.627 Executing: test_invalid_db_write_overflow_cq 00:09:13.627 Waiting for AER completion... 00:09:13.627 Failure: test_invalid_db_write_overflow_cq 00:09:13.627 00:09:13.627 10:46:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:13.627 10:46:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:13.889 [2024-11-20 10:46:02.926033] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64349) is not found. Dropping the request. 00:09:23.868 Executing: test_write_invalid_db 00:09:23.868 Waiting for AER completion... 00:09:23.868 Failure: test_write_invalid_db 00:09:23.868 00:09:23.868 Executing: test_invalid_db_write_overflow_sq 00:09:23.868 Waiting for AER completion... 00:09:23.868 Failure: test_invalid_db_write_overflow_sq 00:09:23.868 00:09:23.868 Executing: test_invalid_db_write_overflow_cq 00:09:23.868 Waiting for AER completion... 00:09:23.868 Failure: test_invalid_db_write_overflow_cq 00:09:23.868 00:09:23.868 10:46:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:23.868 10:46:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:23.868 [2024-11-20 10:46:12.961132] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64349) is not found. Dropping the request. 00:09:33.853 Executing: test_write_invalid_db 00:09:33.853 Waiting for AER completion... 00:09:33.853 Failure: test_write_invalid_db 00:09:33.853 00:09:33.853 Executing: test_invalid_db_write_overflow_sq 00:09:33.853 Waiting for AER completion... 00:09:33.853 Failure: test_invalid_db_write_overflow_sq 00:09:33.853 00:09:33.853 Executing: test_invalid_db_write_overflow_cq 00:09:33.853 Waiting for AER completion... 
00:09:33.853 Failure: test_invalid_db_write_overflow_cq 00:09:33.853 00:09:33.853 10:46:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:33.853 10:46:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:33.853 [2024-11-20 10:46:23.011197] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64349) is not found. Dropping the request. 00:09:43.834 Executing: test_write_invalid_db 00:09:43.834 Waiting for AER completion... 00:09:43.834 Failure: test_write_invalid_db 00:09:43.834 00:09:43.834 Executing: test_invalid_db_write_overflow_sq 00:09:43.834 Waiting for AER completion... 00:09:43.834 Failure: test_invalid_db_write_overflow_sq 00:09:43.834 00:09:43.834 Executing: test_invalid_db_write_overflow_cq 00:09:43.834 Waiting for AER completion... 00:09:43.834 Failure: test_invalid_db_write_overflow_cq 00:09:43.834 00:09:43.834 00:09:43.834 real 0m40.325s 00:09:43.834 user 0m28.375s 00:09:43.834 sys 0m11.605s 00:09:43.834 10:46:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.834 10:46:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:43.834 ************************************ 00:09:43.834 END TEST nvme_doorbell_aers 00:09:43.834 ************************************ 00:09:43.834 10:46:32 nvme -- nvme/nvme.sh@97 -- # uname 00:09:43.834 10:46:32 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:43.834 10:46:32 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:43.834 10:46:32 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:43.834 10:46:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.834 10:46:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:43.834 ************************************ 00:09:43.834 START TEST nvme_multi_aen 00:09:43.834 ************************************ 00:09:43.834 10:46:32 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:44.093 [2024-11-20 10:46:33.118650] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64349) is not found. Dropping the request. 00:09:44.093 [2024-11-20 10:46:33.118743] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64349) is not found. Dropping the request. 00:09:44.093 [2024-11-20 10:46:33.118765] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64349) is not found. Dropping the request. 00:09:44.093 [2024-11-20 10:46:33.120461] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64349) is not found. Dropping the request. 00:09:44.093 [2024-11-20 10:46:33.120497] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64349) is not found. Dropping the request. 00:09:44.093 [2024-11-20 10:46:33.120511] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64349) is not found. Dropping the request. 00:09:44.093 [2024-11-20 10:46:33.121989] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64349) is not found. 
Dropping the request. 00:09:44.093 [2024-11-20 10:46:33.122141] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64349) is not found. Dropping the request. 00:09:44.093 [2024-11-20 10:46:33.122207] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64349) is not found. Dropping the request. 00:09:44.093 [2024-11-20 10:46:33.123702] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64349) is not found. Dropping the request. 00:09:44.093 [2024-11-20 10:46:33.123862] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64349) is not found. Dropping the request. 00:09:44.093 [2024-11-20 10:46:33.123970] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64349) is not found. Dropping the request. 00:09:44.093 Child process pid: 64865 00:09:44.352 [Child] Asynchronous Event Request test 00:09:44.352 [Child] Attached to 0000:00:10.0 00:09:44.352 [Child] Attached to 0000:00:11.0 00:09:44.352 [Child] Attached to 0000:00:13.0 00:09:44.352 [Child] Attached to 0000:00:12.0 00:09:44.352 [Child] Registering asynchronous event callbacks... 00:09:44.352 [Child] Getting orig temperature thresholds of all controllers 00:09:44.352 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:44.352 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:44.352 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:44.352 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:44.352 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:44.352 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:44.352 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:44.352 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:44.352 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:44.352 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.352 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.352 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.352 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.352 [Child] Cleaning up... 00:09:44.352 Asynchronous Event Request test 00:09:44.352 Attached to 0000:00:10.0 00:09:44.352 Attached to 0000:00:11.0 00:09:44.352 Attached to 0000:00:13.0 00:09:44.352 Attached to 0000:00:12.0 00:09:44.352 Reset controller to setup AER completions for this process 00:09:44.353 Registering asynchronous event callbacks... 
00:09:44.353 Getting orig temperature thresholds of all controllers 00:09:44.353 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:44.353 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:44.353 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:44.353 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:44.353 Setting all controllers temperature threshold low to trigger AER 00:09:44.353 Waiting for all controllers temperature threshold to be set lower 00:09:44.353 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:44.353 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:44.353 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:44.353 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:44.353 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:44.353 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:44.353 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:44.353 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:44.353 Waiting for all controllers to trigger AER and reset threshold 00:09:44.353 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.353 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.353 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.353 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.353 Cleaning up... 00:09:44.353 00:09:44.353 real 0m0.636s 00:09:44.353 user 0m0.229s 00:09:44.353 sys 0m0.290s 00:09:44.353 10:46:33 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.353 10:46:33 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:44.353 ************************************ 00:09:44.353 END TEST nvme_multi_aen 00:09:44.353 ************************************ 00:09:44.353 10:46:33 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:44.353 10:46:33 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:44.353 10:46:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.353 10:46:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:44.353 ************************************ 00:09:44.353 START TEST nvme_startup 00:09:44.353 ************************************ 00:09:44.353 10:46:33 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:44.612 Initializing NVMe Controllers 00:09:44.612 Attached to 0000:00:10.0 00:09:44.612 Attached to 0000:00:11.0 00:09:44.612 Attached to 0000:00:13.0 00:09:44.612 Attached to 0000:00:12.0 00:09:44.612 Initialization complete. 00:09:44.612 Time used:192911.969 (us). 
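A quick unit check on the startup figure above: the tool reports initialization time in microseconds, and converting it lines up with the sub-second wall time the harness prints just below (real 0m0.290s, which also covers process setup and teardown):

    awk 'BEGIN { printf "%.3f s\n", 192911.969 / 1e6 }'   # -> 0.193 s to attach all four controllers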
00:09:44.612 ************************************ 00:09:44.612 END TEST nvme_startup 00:09:44.612 ************************************ 00:09:44.612 00:09:44.612 real 0m0.290s 00:09:44.612 user 0m0.108s 00:09:44.612 sys 0m0.139s 00:09:44.612 10:46:33 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.612 10:46:33 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:44.870 10:46:33 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:44.870 10:46:33 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.870 10:46:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.870 10:46:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:44.870 ************************************ 00:09:44.870 START TEST nvme_multi_secondary 00:09:44.870 ************************************ 00:09:44.870 10:46:33 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:44.870 10:46:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64921 00:09:44.870 10:46:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:44.871 10:46:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64922 00:09:44.871 10:46:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:44.871 10:46:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:48.162 Initializing NVMe Controllers 00:09:48.162 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:48.162 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:48.162 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:48.162 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:48.162 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:48.162 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:48.162 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:48.162 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:48.162 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:48.162 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:48.162 Initialization complete. Launching workers. 
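nvme_multi_secondary, starting here, runs three spdk_nvme_perf instances against the same controllers at once to exercise SPDK's multi-process mode: all three share state via the same shared-memory id (-i 0) but are pinned to different cores by mask (0x1 = core 0, 0x2 = core 1, 0x4 = core 2), with the 5-second instance deliberately outliving the two 3-second ones. Reconstructed roughly from the nvme.sh@51-57 trace (the pid0=$! / pid1=$! assignments are assumed; the log shows the resulting pids 64921 and 64922):

    perf=$rootdir/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # long-lived instance, core 0
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # core 1, exits first
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # core 2, run in the foreground
    wait "$pid0"
    wait "$pid1"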
00:09:48.162 ======================================================== 00:09:48.162 Latency(us) 00:09:48.162 Device Information : IOPS MiB/s Average min max 00:09:48.162 PCIE (0000:00:10.0) NSID 1 from core 1: 5188.77 20.27 3081.40 924.08 7514.58 00:09:48.162 PCIE (0000:00:11.0) NSID 1 from core 1: 5188.77 20.27 3083.13 960.74 8062.47 00:09:48.162 PCIE (0000:00:13.0) NSID 1 from core 1: 5188.77 20.27 3083.38 955.67 8156.35 00:09:48.162 PCIE (0000:00:12.0) NSID 1 from core 1: 5188.77 20.27 3083.70 936.17 7942.77 00:09:48.162 PCIE (0000:00:12.0) NSID 2 from core 1: 5188.77 20.27 3083.87 945.72 8394.99 00:09:48.162 PCIE (0000:00:12.0) NSID 3 from core 1: 5194.11 20.29 3081.05 955.02 7164.86 00:09:48.162 ======================================================== 00:09:48.162 Total : 31137.98 121.63 3082.76 924.08 8394.99 00:09:48.162 00:09:48.421 Initializing NVMe Controllers 00:09:48.421 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:48.421 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:48.421 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:48.421 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:48.421 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:48.421 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:48.421 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:48.421 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:48.421 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:48.421 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:48.421 Initialization complete. Launching workers. 00:09:48.421 ======================================================== 00:09:48.421 Latency(us) 00:09:48.421 Device Information : IOPS MiB/s Average min max 00:09:48.421 PCIE (0000:00:10.0) NSID 1 from core 2: 3539.58 13.83 4518.91 1111.34 11988.23 00:09:48.421 PCIE (0000:00:11.0) NSID 1 from core 2: 3539.58 13.83 4519.51 1108.03 10665.04 00:09:48.421 PCIE (0000:00:13.0) NSID 1 from core 2: 3539.58 13.83 4519.82 1222.00 11522.58 00:09:48.421 PCIE (0000:00:12.0) NSID 1 from core 2: 3539.58 13.83 4519.72 1059.75 11154.95 00:09:48.421 PCIE (0000:00:12.0) NSID 2 from core 2: 3539.58 13.83 4519.70 1074.21 11509.53 00:09:48.421 PCIE (0000:00:12.0) NSID 3 from core 2: 3539.58 13.83 4513.71 933.40 11570.92 00:09:48.421 ======================================================== 00:09:48.421 Total : 21237.47 82.96 4518.56 933.40 11988.23 00:09:48.421 00:09:48.421 10:46:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64921 00:09:50.385 Initializing NVMe Controllers 00:09:50.385 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:50.385 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:50.385 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:50.385 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:50.385 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:50.385 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:50.385 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:50.385 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:50.385 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:50.385 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:50.385 Initialization complete. Launching workers. 
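The perf tables are easy to sanity-check: at a 4096-byte block size, IOPS times block size must reproduce the MiB/s column. Taking the first core-1 row above:

    awk 'BEGIN { printf "%.2f MiB/s\n", 5188.77 * 4096 / (1024 * 1024) }'   # -> 20.27, matching the table

The same identity holds row by row, including the lower-IOPS, higher-latency core-2 tables, where the runs overlap with the other instances on the same drives.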
00:09:50.385 ======================================================== 00:09:50.385 Latency(us) 00:09:50.385 Device Information : IOPS MiB/s Average min max 00:09:50.385 PCIE (0000:00:10.0) NSID 1 from core 0: 8600.64 33.60 1858.95 910.18 9037.64 00:09:50.385 PCIE (0000:00:11.0) NSID 1 from core 0: 8600.64 33.60 1859.89 915.69 8870.56 00:09:50.385 PCIE (0000:00:13.0) NSID 1 from core 0: 8600.64 33.60 1859.86 829.13 8943.97 00:09:50.385 PCIE (0000:00:12.0) NSID 1 from core 0: 8600.64 33.60 1859.81 769.72 8873.27 00:09:50.385 PCIE (0000:00:12.0) NSID 2 from core 0: 8600.64 33.60 1859.78 711.37 8729.12 00:09:50.385 PCIE (0000:00:12.0) NSID 3 from core 0: 8600.64 33.60 1859.75 671.07 8831.83 00:09:50.385 ======================================================== 00:09:50.385 Total : 51603.82 201.58 1859.67 671.07 9037.64 00:09:50.385 00:09:50.385 10:46:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64922 00:09:50.385 10:46:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64991 00:09:50.385 10:46:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:50.385 10:46:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64992 00:09:50.385 10:46:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:50.385 10:46:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:53.685 Initializing NVMe Controllers 00:09:53.685 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:53.685 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:53.685 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:53.685 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:53.685 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:53.685 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:53.685 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:53.685 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:53.685 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:53.685 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:53.685 Initialization complete. Launching workers. 
00:09:53.685 ======================================================== 00:09:53.685 Latency(us) 00:09:53.685 Device Information : IOPS MiB/s Average min max 00:09:53.685 PCIE (0000:00:10.0) NSID 1 from core 1: 4970.51 19.42 3216.72 1045.82 7479.15 00:09:53.685 PCIE (0000:00:11.0) NSID 1 from core 1: 4970.51 19.42 3219.20 1064.27 7782.28 00:09:53.685 PCIE (0000:00:13.0) NSID 1 from core 1: 4970.51 19.42 3219.41 1074.83 7262.19 00:09:53.685 PCIE (0000:00:12.0) NSID 1 from core 1: 4970.51 19.42 3219.58 1073.60 6634.28 00:09:53.685 PCIE (0000:00:12.0) NSID 2 from core 1: 4970.51 19.42 3219.91 1061.95 6858.71 00:09:53.685 PCIE (0000:00:12.0) NSID 3 from core 1: 4970.51 19.42 3220.07 1074.35 6658.89 00:09:53.685 ======================================================== 00:09:53.685 Total : 29823.06 116.50 3219.15 1045.82 7782.28 00:09:53.685 00:09:53.944 Initializing NVMe Controllers 00:09:53.944 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:53.944 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:53.944 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:53.944 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:53.944 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:53.944 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:53.944 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:53.944 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:53.944 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:53.944 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:53.944 Initialization complete. Launching workers. 00:09:53.944 ======================================================== 00:09:53.944 Latency(us) 00:09:53.944 Device Information : IOPS MiB/s Average min max 00:09:53.944 PCIE (0000:00:10.0) NSID 1 from core 0: 4992.76 19.50 3202.24 994.22 9207.25 00:09:53.944 PCIE (0000:00:11.0) NSID 1 from core 0: 4992.76 19.50 3204.23 1027.77 8963.08 00:09:53.944 PCIE (0000:00:13.0) NSID 1 from core 0: 4992.76 19.50 3204.33 1048.63 8510.05 00:09:53.944 PCIE (0000:00:12.0) NSID 1 from core 0: 4992.76 19.50 3204.29 1047.48 7444.65 00:09:53.944 PCIE (0000:00:12.0) NSID 2 from core 0: 4992.76 19.50 3204.26 1021.46 7257.97 00:09:53.944 PCIE (0000:00:12.0) NSID 3 from core 0: 4992.76 19.50 3204.23 1014.31 8959.04 00:09:53.944 ======================================================== 00:09:53.944 Total : 29956.59 117.02 3203.93 994.22 9207.25 00:09:53.944 00:09:55.845 Initializing NVMe Controllers 00:09:55.845 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:55.845 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:55.845 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:55.845 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:55.845 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:55.845 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:55.845 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:55.845 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:55.845 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:55.845 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:55.845 Initialization complete. Launching workers. 
00:09:55.845 ======================================================== 00:09:55.845 Latency(us) 00:09:55.845 Device Information : IOPS MiB/s Average min max 00:09:55.845 PCIE (0000:00:10.0) NSID 1 from core 2: 3339.27 13.04 4790.39 1030.92 12355.89 00:09:55.845 PCIE (0000:00:11.0) NSID 1 from core 2: 3339.27 13.04 4791.41 1051.86 13150.88 00:09:55.845 PCIE (0000:00:13.0) NSID 1 from core 2: 3342.47 13.06 4786.76 1058.06 13024.24 00:09:55.845 PCIE (0000:00:12.0) NSID 1 from core 2: 3342.47 13.06 4786.68 1039.34 13276.22 00:09:55.845 PCIE (0000:00:12.0) NSID 2 from core 2: 3342.47 13.06 4786.62 1026.47 13115.44 00:09:55.845 PCIE (0000:00:12.0) NSID 3 from core 2: 3342.47 13.06 4786.57 1035.54 12335.19 00:09:55.845 ======================================================== 00:09:55.845 Total : 20048.43 78.31 4788.07 1026.47 13276.22 00:09:55.845 00:09:55.845 ************************************ 00:09:55.845 END TEST nvme_multi_secondary 00:09:55.845 ************************************ 00:09:55.845 10:46:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64991 00:09:55.845 10:46:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64992 00:09:55.845 00:09:55.845 real 0m10.837s 00:09:55.845 user 0m18.581s 00:09:55.845 sys 0m1.016s 00:09:55.845 10:46:44 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.845 10:46:44 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:55.845 10:46:44 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:55.845 10:46:44 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:55.845 10:46:44 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/63929 ]] 00:09:55.845 10:46:44 nvme -- common/autotest_common.sh@1094 -- # kill 63929 00:09:55.845 10:46:44 nvme -- common/autotest_common.sh@1095 -- # wait 63929 00:09:55.845 [2024-11-20 10:46:44.832735] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64860) is not found. Dropping the request. 00:09:55.845 [2024-11-20 10:46:44.833181] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64860) is not found. Dropping the request. 00:09:55.845 [2024-11-20 10:46:44.833274] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64860) is not found. Dropping the request. 00:09:55.845 [2024-11-20 10:46:44.833339] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64860) is not found. Dropping the request. 00:09:55.845 [2024-11-20 10:46:44.838068] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64860) is not found. Dropping the request. 00:09:55.845 [2024-11-20 10:46:44.838145] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64860) is not found. Dropping the request. 00:09:55.845 [2024-11-20 10:46:44.838177] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64860) is not found. Dropping the request. 00:09:55.845 [2024-11-20 10:46:44.838211] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64860) is not found. Dropping the request. 00:09:55.845 [2024-11-20 10:46:44.842968] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64860) is not found. Dropping the request. 
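The flood of "The owning process (pid 64860) is not found. Dropping the request." messages here (and continuing below) is teardown noise rather than a failure: nvme.sh@102 calls kill_stub, which stops the long-lived stub process (pid 63929) that kept the controllers initialized across tests, and admin requests still attributed to an already-exited test process (pid 64860) are dropped as it shuts down. A rough reconstruction of the helper from the autotest_common.sh trace (the stubpid variable name is assumed; the log shows the literal pid):

    kill_stub() {
        [[ -e /proc/$stubpid ]] || return 0
        kill "$stubpid"
        wait "$stubpid"            # pending admin requests owned by dead processes get dropped here
        rm -f /var/run/spdk_stub0
    }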
00:09:55.845 [2024-11-20 10:46:44.843043] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64860) is not found. Dropping the request. 00:09:55.845 [2024-11-20 10:46:44.843074] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64860) is not found. Dropping the request. 00:09:55.845 [2024-11-20 10:46:44.843108] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64860) is not found. Dropping the request. 00:09:55.845 [2024-11-20 10:46:44.846840] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64860) is not found. Dropping the request. 00:09:55.845 [2024-11-20 10:46:44.846912] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64860) is not found. Dropping the request. 00:09:55.845 [2024-11-20 10:46:44.846933] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64860) is not found. Dropping the request. 00:09:55.845 [2024-11-20 10:46:44.846956] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64860) is not found. Dropping the request. 00:09:55.845 10:46:45 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:55.845 10:46:45 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:55.845 10:46:45 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:55.845 10:46:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:55.845 10:46:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.845 10:46:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:55.845 ************************************ 00:09:55.845 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:55.845 ************************************ 00:09:55.845 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:56.105 * Looking for test storage... 
00:09:56.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:56.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.105 --rc genhtml_branch_coverage=1 00:09:56.105 --rc genhtml_function_coverage=1 00:09:56.105 --rc genhtml_legend=1 00:09:56.105 --rc geninfo_all_blocks=1 00:09:56.105 --rc geninfo_unexecuted_blocks=1 00:09:56.105 00:09:56.105 ' 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:56.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.105 --rc genhtml_branch_coverage=1 00:09:56.105 --rc genhtml_function_coverage=1 00:09:56.105 --rc genhtml_legend=1 00:09:56.105 --rc geninfo_all_blocks=1 00:09:56.105 --rc geninfo_unexecuted_blocks=1 00:09:56.105 00:09:56.105 ' 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:56.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.105 --rc genhtml_branch_coverage=1 00:09:56.105 --rc genhtml_function_coverage=1 00:09:56.105 --rc genhtml_legend=1 00:09:56.105 --rc geninfo_all_blocks=1 00:09:56.105 --rc geninfo_unexecuted_blocks=1 00:09:56.105 00:09:56.105 ' 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:56.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.105 --rc genhtml_branch_coverage=1 00:09:56.105 --rc genhtml_function_coverage=1 00:09:56.105 --rc genhtml_legend=1 00:09:56.105 --rc geninfo_all_blocks=1 00:09:56.105 --rc geninfo_unexecuted_blocks=1 00:09:56.105 00:09:56.105 ' 00:09:56.105 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:56.106 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:56.106 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:56.106 
10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:56.106 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:56.106 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:56.106 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:56.106 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:56.106 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:56.106 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:56.106 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:56.106 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:56.106 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:56.106 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:56.106 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:56.364 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:56.364 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:56.364 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:56.364 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:56.364 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:56.364 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65159 00:09:56.364 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:56.364 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:56.364 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65159 00:09:56.364 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65159 ']' 00:09:56.364 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.364 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.364 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
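What this test (nvme_reset_stuck_adm_cmd) is about to do: verify that resetting a controller recovers an admin command which error injection has deliberately wedged for up to 15 s. Stripped of the xtrace noise, the flow spelled out over the next stretch of log is roughly the following (rpc.py stands for scripts/rpc.py against the freshly started target; $cmd_b64 is a stand-in for the base64-encoded Get Features command visible in the trace, and $tmp_file for the mktemp'd err_inj file):

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # arm the injection: hold the next admin Get Features (opc 10) for up to 15 s,
    # then complete it with sct=0 / sc=1
    rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" > "$tmp_file" &
    get_feat_pid=$!                           # this RPC is now stuck on the injected error
    sleep 2
    rpc.py bdev_nvme_reset_controller nvme0   # the reset must complete the stuck command manually
    wait "$get_feat_pid"
    rpc.py bdev_nvme_detach_controller nvme0

Afterwards the script decodes the completion saved in $tmp_file and asserts two things (visible near the end of the test): the injected status came back (sct 0x0, sc 0x1, printed as INVALID OPCODE (00/01) in the log), and the command was released within test_timeout=5 s, i.e. by the reset rather than by the 15-second injection timer expiring.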
00:09:56.364 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.364 10:46:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:56.364 [2024-11-20 10:46:45.484635] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:09:56.364 [2024-11-20 10:46:45.484763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65159 ] 00:09:56.621 [2024-11-20 10:46:45.681793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.621 [2024-11-20 10:46:45.793708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.621 [2024-11-20 10:46:45.793883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.621 [2024-11-20 10:46:45.794029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.621 [2024-11-20 10:46:45.794052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:57.555 nvme0n1 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_DcsbV.txt 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:57.555 true 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732099606 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65182 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:57.555 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:57.556 10:46:46 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:00.086 [2024-11-20 10:46:48.778014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:00.086 [2024-11-20 10:46:48.778440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:00.086 [2024-11-20 10:46:48.778569] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:00.086 [2024-11-20 10:46:48.778739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:00.086 [2024-11-20 10:46:48.780761] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:00.086 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65182 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65182 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65182 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_DcsbV.txt 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:00.086 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_DcsbV.txt 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65159 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65159 ']' 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65159 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65159 00:10:00.087 killing process with pid 65159 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65159' 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65159 00:10:00.087 10:46:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65159 00:10:02.620 10:46:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:02.620 10:46:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:02.620 00:10:02.620 real 0m6.312s 00:10:02.620 user 0m21.996s 00:10:02.620 sys 0m0.782s 00:10:02.620 10:46:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:02.620 ************************************ 00:10:02.620 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:02.620 ************************************ 00:10:02.620 10:46:51 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:02.620 10:46:51 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:02.620 10:46:51 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:02.620 10:46:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.620 10:46:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.620 10:46:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:02.620 ************************************ 00:10:02.620 START TEST nvme_fio 00:10:02.620 ************************************ 00:10:02.620 10:46:51 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:10:02.620 10:46:51 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:02.620 10:46:51 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:02.620 10:46:51 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:02.620 10:46:51 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:02.620 10:46:51 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:10:02.620 10:46:51 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:02.620 10:46:51 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:02.620 10:46:51 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:02.620 10:46:51 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:02.620 10:46:51 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:02.620 10:46:51 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:02.620 10:46:51 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:02.620 10:46:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:02.620 10:46:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:02.620 10:46:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:02.620 10:46:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:02.620 10:46:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:02.901 10:46:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:02.901 10:46:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:02.901 10:46:52 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:02.901 10:46:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:02.901 10:46:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:02.901 10:46:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:02.901 10:46:52 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:02.901 10:46:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:02.901 10:46:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:02.901 10:46:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:02.901 10:46:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:02.901 10:46:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:02.901 10:46:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:02.901 10:46:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:02.901 10:46:52 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:02.901 10:46:52 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:02.901 10:46:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:02.901 10:46:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:03.159 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:03.159 fio-3.35 00:10:03.159 Starting 1 thread 00:10:07.348 00:10:07.348 test: (groupid=0, jobs=1): err= 0: pid=65342: Wed Nov 20 10:46:55 2024 00:10:07.348 read: IOPS=24.1k, BW=94.3MiB/s (98.9MB/s)(189MiB/2001msec) 00:10:07.348 slat (nsec): min=3714, max=49013, avg=4141.76, stdev=806.05 00:10:07.348 clat (usec): min=204, max=10974, avg=2645.89, stdev=282.01 00:10:07.348 lat (usec): min=208, max=11023, avg=2650.03, stdev=282.38 00:10:07.348 clat percentiles (usec): 00:10:07.348 | 1.00th=[ 2089], 5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2540], 00:10:07.348 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2638], 00:10:07.348 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2835], 00:10:07.348 | 99.00th=[ 3458], 99.50th=[ 4178], 99.90th=[ 5932], 99.95th=[ 7767], 00:10:07.348 | 99.99th=[10683] 00:10:07.348 bw ( KiB/s): min=92624, max=97320, per=99.12%, avg=95704.00, stdev=2668.44, samples=3 00:10:07.348 iops : min=23156, max=24330, avg=23926.00, stdev=667.11, samples=3 00:10:07.348 write: IOPS=24.0k, BW=93.7MiB/s (98.2MB/s)(187MiB/2001msec); 0 zone resets 00:10:07.348 slat (nsec): min=3789, max=30641, avg=4453.38, stdev=818.49 00:10:07.348 clat (usec): min=182, max=10774, avg=2651.12, stdev=289.21 00:10:07.348 lat (usec): min=187, max=10794, avg=2655.57, stdev=289.53 00:10:07.348 clat percentiles (usec): 00:10:07.348 | 1.00th=[ 2073], 5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2540], 00:10:07.348 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2671], 00:10:07.348 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2835], 00:10:07.348 | 99.00th=[ 3523], 99.50th=[ 4293], 99.90th=[ 6128], 99.95th=[ 8160], 00:10:07.348 | 99.99th=[10290] 00:10:07.348 bw ( KiB/s): min=92408, max=98264, per=99.86%, avg=95773.33, stdev=3024.40, samples=3 00:10:07.348 iops : min=23102, max=24566, avg=23943.33, stdev=756.10, samples=3 00:10:07.348 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:07.348 lat (msec) : 2=0.71%, 4=98.62%, 10=0.62%, 20=0.02% 00:10:07.348 cpu : usr=99.45%, sys=0.15%, ctx=3, 
majf=0, minf=607 00:10:07.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:07.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:07.348 issued rwts: total=48299,47976,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:07.348 00:10:07.348 Run status group 0 (all jobs): 00:10:07.348 READ: bw=94.3MiB/s (98.9MB/s), 94.3MiB/s-94.3MiB/s (98.9MB/s-98.9MB/s), io=189MiB (198MB), run=2001-2001msec 00:10:07.348 WRITE: bw=93.7MiB/s (98.2MB/s), 93.7MiB/s-93.7MiB/s (98.2MB/s-98.2MB/s), io=187MiB (197MB), run=2001-2001msec 00:10:07.348 ----------------------------------------------------- 00:10:07.348 Suppressions used: 00:10:07.348 count bytes template 00:10:07.348 1 32 /usr/src/fio/parse.c 00:10:07.348 1 8 libtcmalloc_minimal.so 00:10:07.348 ----------------------------------------------------- 00:10:07.348 00:10:07.348 10:46:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:07.348 10:46:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:07.348 10:46:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:07.348 10:46:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:07.348 10:46:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:07.348 10:46:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:07.607 10:46:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:07.607 10:46:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:07.607 10:46:56 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:07.607 10:46:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:07.607 10:46:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:07.607 10:46:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:07.607 10:46:56 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:07.607 10:46:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:07.607 10:46:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:07.607 10:46:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:07.607 10:46:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:07.607 10:46:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:07.607 10:46:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:07.607 10:46:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:07.607 10:46:56 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:07.607 10:46:56 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:07.607 10:46:56 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:07.607 10:46:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:07.866 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:07.866 fio-3.35 00:10:07.866 Starting 1 thread 00:10:12.055 00:10:12.055 test: (groupid=0, jobs=1): err= 0: pid=65404: Wed Nov 20 10:47:00 2024 00:10:12.055 read: IOPS=24.3k, BW=94.8MiB/s (99.4MB/s)(190MiB/2001msec) 00:10:12.055 slat (nsec): min=3708, max=58685, avg=4090.92, stdev=928.56 00:10:12.055 clat (usec): min=187, max=10211, avg=2629.45, stdev=251.66 00:10:12.055 lat (usec): min=191, max=10269, avg=2633.54, stdev=252.04 00:10:12.055 clat percentiles (usec): 00:10:12.055 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2540], 00:10:12.055 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2606], 60.00th=[ 2638], 00:10:12.055 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2737], 95.00th=[ 2835], 00:10:12.055 | 99.00th=[ 3294], 99.50th=[ 3752], 99.90th=[ 5276], 99.95th=[ 7373], 00:10:12.055 | 99.99th=[10028] 00:10:12.055 bw ( KiB/s): min=94696, max=97576, per=99.35%, avg=96464.00, stdev=1548.02, samples=3 00:10:12.055 iops : min=23674, max=24394, avg=24116.00, stdev=387.00, samples=3 00:10:12.055 write: IOPS=24.1k, BW=94.2MiB/s (98.8MB/s)(189MiB/2001msec); 0 zone resets 00:10:12.055 slat (nsec): min=3814, max=33210, avg=4373.21, stdev=897.15 00:10:12.055 clat (usec): min=171, max=10033, avg=2635.28, stdev=258.21 00:10:12.055 lat (usec): min=175, max=10054, avg=2639.66, stdev=258.55 00:10:12.055 clat percentiles (usec): 00:10:12.055 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2540], 00:10:12.055 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2638], 00:10:12.055 | 70.00th=[ 2671], 80.00th=[ 2704], 90.00th=[ 2769], 95.00th=[ 2835], 00:10:12.055 | 99.00th=[ 3359], 99.50th=[ 3851], 99.90th=[ 5538], 99.95th=[ 7635], 00:10:12.055 | 99.99th=[ 9765] 00:10:12.055 bw ( KiB/s): min=94640, max=99048, per=100.00%, avg=96642.67, stdev=2231.42, samples=3 00:10:12.055 iops : min=23660, max=24762, avg=24160.67, stdev=557.85, samples=3 00:10:12.055 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:12.055 lat (msec) : 2=0.65%, 4=98.88%, 10=0.42%, 20=0.01% 00:10:12.055 cpu : usr=99.40%, sys=0.10%, ctx=4, majf=0, minf=608 00:10:12.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:12.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.055 issued rwts: total=48573,48275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.055 00:10:12.055 Run status group 0 (all jobs): 00:10:12.055 READ: bw=94.8MiB/s (99.4MB/s), 94.8MiB/s-94.8MiB/s (99.4MB/s-99.4MB/s), io=190MiB (199MB), run=2001-2001msec 00:10:12.055 WRITE: bw=94.2MiB/s (98.8MB/s), 94.2MiB/s-94.2MiB/s (98.8MB/s-98.8MB/s), io=189MiB (198MB), run=2001-2001msec 00:10:12.055 ----------------------------------------------------- 00:10:12.055 Suppressions used: 00:10:12.055 count bytes template 00:10:12.055 1 32 /usr/src/fio/parse.c 00:10:12.055 1 8 libtcmalloc_minimal.so 00:10:12.055 ----------------------------------------------------- 00:10:12.055 
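Each per-controller fio pass in this section follows the same recipe: run spdk_nvme_identify against the device, grep the output for 'Extended Data LBA' to choose a block size (here it settles on bs=4096), then run stock fio with SPDK's external ioengine preloaded alongside ASan. The effective command, lifted from the trace (note the PCIe address is written with dots, 0000.00.11.0, because colons are field separators in fio's --filename syntax):

    LD_PRELOAD="/usr/lib64/libasan.so.8 $rootdir/build/fio/spdk_nvme" \
        /usr/src/fio/fio "$rootdir/app/fio/nvme/example_config.fio" \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

Preloading the ASan runtime ahead of the fio plugin lets the sanitizer instrument the plugin even though the fio binary itself was not built with ASan.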
00:10:12.055 10:47:00 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:12.055 10:47:00 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:12.055 10:47:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:12.055 10:47:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:12.055 10:47:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:12.055 10:47:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:12.314 10:47:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:12.314 10:47:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:12.314 10:47:01 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:12.314 10:47:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:12.314 10:47:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:12.314 10:47:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:12.314 10:47:01 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:12.314 10:47:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:12.314 10:47:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:12.314 10:47:01 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:12.314 10:47:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:12.314 10:47:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:12.314 10:47:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:12.314 10:47:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:12.314 10:47:01 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:12.314 10:47:01 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:12.314 10:47:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:12.314 10:47:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:12.572 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:12.572 fio-3.35 00:10:12.572 Starting 1 thread 00:10:16.757 00:10:16.757 test: (groupid=0, jobs=1): err= 0: pid=65465: Wed Nov 20 10:47:05 2024 00:10:16.757 read: IOPS=23.4k, BW=91.2MiB/s (95.7MB/s)(183MiB/2001msec) 00:10:16.757 slat (nsec): min=3752, max=46634, avg=4208.99, stdev=1007.14 00:10:16.757 clat (usec): min=201, max=11336, avg=2733.47, stdev=345.60 00:10:16.757 lat (usec): min=205, max=11383, avg=2737.68, stdev=346.00 00:10:16.757 clat percentiles (usec): 00:10:16.757 | 1.00th=[ 2376], 5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2573], 00:10:16.757 | 
30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:10:16.757 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 3032], 95.00th=[ 3228], 00:10:16.757 | 99.00th=[ 3785], 99.50th=[ 4555], 99.90th=[ 6259], 99.95th=[ 8029], 00:10:16.757 | 99.99th=[11076] 00:10:16.757 bw ( KiB/s): min=86552, max=96112, per=98.36%, avg=91888.00, stdev=4876.04, samples=3 00:10:16.757 iops : min=21638, max=24028, avg=22972.00, stdev=1219.01, samples=3 00:10:16.757 write: IOPS=23.2k, BW=90.7MiB/s (95.1MB/s)(181MiB/2001msec); 0 zone resets 00:10:16.757 slat (nsec): min=3838, max=24165, avg=4486.34, stdev=932.81 00:10:16.757 clat (usec): min=178, max=11225, avg=2739.22, stdev=353.89 00:10:16.757 lat (usec): min=182, max=11246, avg=2743.71, stdev=354.24 00:10:16.757 clat percentiles (usec): 00:10:16.757 | 1.00th=[ 2376], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2573], 00:10:16.757 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:10:16.757 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 3032], 95.00th=[ 3261], 00:10:16.757 | 99.00th=[ 3818], 99.50th=[ 4621], 99.90th=[ 6521], 99.95th=[ 8455], 00:10:16.757 | 99.99th=[10814] 00:10:16.758 bw ( KiB/s): min=86144, max=95440, per=99.08%, avg=91989.33, stdev=5089.67, samples=3 00:10:16.758 iops : min=21536, max=23860, avg=22997.33, stdev=1272.42, samples=3 00:10:16.758 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:16.758 lat (msec) : 2=0.30%, 4=98.91%, 10=0.72%, 20=0.02% 00:10:16.758 cpu : usr=99.30%, sys=0.15%, ctx=3, majf=0, minf=607 00:10:16.758 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:16.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.758 issued rwts: total=46734,46443,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.758 00:10:16.758 Run status group 0 (all jobs): 00:10:16.758 READ: bw=91.2MiB/s (95.7MB/s), 91.2MiB/s-91.2MiB/s (95.7MB/s-95.7MB/s), io=183MiB (191MB), run=2001-2001msec 00:10:16.758 WRITE: bw=90.7MiB/s (95.1MB/s), 90.7MiB/s-90.7MiB/s (95.1MB/s-95.1MB/s), io=181MiB (190MB), run=2001-2001msec 00:10:16.758 ----------------------------------------------------- 00:10:16.758 Suppressions used: 00:10:16.758 count bytes template 00:10:16.758 1 32 /usr/src/fio/parse.c 00:10:16.758 1 8 libtcmalloc_minimal.so 00:10:16.758 ----------------------------------------------------- 00:10:16.758 00:10:16.758 10:47:05 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:16.758 10:47:05 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:16.758 10:47:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:16.758 10:47:05 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:17.016 10:47:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:17.016 10:47:06 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:17.274 10:47:06 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:17.274 10:47:06 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:17.274 10:47:06 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:17.274 10:47:06 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:17.274 10:47:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:17.274 10:47:06 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:17.274 10:47:06 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:17.274 10:47:06 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:17.274 10:47:06 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:17.274 10:47:06 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:17.274 10:47:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:17.274 10:47:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:17.274 10:47:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:17.533 10:47:06 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:17.533 10:47:06 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:17.533 10:47:06 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:17.533 10:47:06 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:17.533 10:47:06 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:17.533 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:17.533 fio-3.35 00:10:17.533 Starting 1 thread 00:10:24.110 00:10:24.110 test: (groupid=0, jobs=1): err= 0: pid=65532: Wed Nov 20 10:47:12 2024 00:10:24.110 read: IOPS=23.6k, BW=92.2MiB/s (96.7MB/s)(185MiB/2001msec) 00:10:24.110 slat (nsec): min=3755, max=49924, avg=4224.28, stdev=1095.84 00:10:24.110 clat (usec): min=203, max=11085, avg=2705.41, stdev=310.93 00:10:24.110 lat (usec): min=207, max=11135, avg=2709.63, stdev=311.30 00:10:24.110 clat percentiles (usec): 00:10:24.110 | 1.00th=[ 2311], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2573], 00:10:24.110 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:10:24.110 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 2868], 95.00th=[ 3130], 00:10:24.110 | 99.00th=[ 3621], 99.50th=[ 3916], 99.90th=[ 5669], 99.95th=[ 8225], 00:10:24.110 | 99.99th=[10814] 00:10:24.110 bw ( KiB/s): min=92392, max=94656, per=99.00%, avg=93501.33, stdev=1132.68, samples=3 00:10:24.110 iops : min=23098, max=23664, avg=23375.33, stdev=283.17, samples=3 00:10:24.110 write: IOPS=23.5k, BW=91.6MiB/s (96.1MB/s)(183MiB/2001msec); 0 zone resets 00:10:24.110 slat (nsec): min=3847, max=40392, avg=4480.79, stdev=1078.80 00:10:24.110 clat (usec): min=220, max=10910, avg=2709.54, stdev=314.86 00:10:24.110 lat (usec): min=224, max=10932, avg=2714.02, stdev=315.21 00:10:24.111 clat percentiles (usec): 00:10:24.111 | 1.00th=[ 2343], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2573], 00:10:24.111 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:10:24.111 | 70.00th=[ 2737], 80.00th=[ 
2802], 90.00th=[ 2868], 95.00th=[ 3097], 00:10:24.111 | 99.00th=[ 3654], 99.50th=[ 3949], 99.90th=[ 6128], 99.95th=[ 8455], 00:10:24.111 | 99.99th=[10552] 00:10:24.111 bw ( KiB/s): min=92216, max=96232, per=99.78%, avg=93597.33, stdev=2282.59, samples=3 00:10:24.111 iops : min=23054, max=24058, avg=23399.33, stdev=570.65, samples=3 00:10:24.111 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:24.111 lat (msec) : 2=0.32%, 4=99.22%, 10=0.40%, 20=0.02% 00:10:24.111 cpu : usr=99.45%, sys=0.10%, ctx=5, majf=0, minf=605 00:10:24.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:24.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.111 issued rwts: total=47246,46926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.111 00:10:24.111 Run status group 0 (all jobs): 00:10:24.111 READ: bw=92.2MiB/s (96.7MB/s), 92.2MiB/s-92.2MiB/s (96.7MB/s-96.7MB/s), io=185MiB (194MB), run=2001-2001msec 00:10:24.111 WRITE: bw=91.6MiB/s (96.1MB/s), 91.6MiB/s-91.6MiB/s (96.1MB/s-96.1MB/s), io=183MiB (192MB), run=2001-2001msec 00:10:24.111 ----------------------------------------------------- 00:10:24.111 Suppressions used: 00:10:24.111 count bytes template 00:10:24.111 1 32 /usr/src/fio/parse.c 00:10:24.111 1 8 libtcmalloc_minimal.so 00:10:24.111 ----------------------------------------------------- 00:10:24.111 00:10:24.111 10:47:12 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:24.111 10:47:12 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:24.111 00:10:24.111 real 0m21.258s 00:10:24.111 user 0m16.791s 00:10:24.111 sys 0m4.520s 00:10:24.111 10:47:12 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.111 10:47:12 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:24.111 ************************************ 00:10:24.111 END TEST nvme_fio 00:10:24.111 ************************************ 00:10:24.111 ************************************ 00:10:24.111 END TEST nvme 00:10:24.111 ************************************ 00:10:24.111 00:10:24.111 real 1m36.212s 00:10:24.111 user 3m44.517s 00:10:24.111 sys 0m23.762s 00:10:24.111 10:47:12 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.111 10:47:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:24.111 10:47:12 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:24.111 10:47:12 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:24.111 10:47:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:24.111 10:47:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.111 10:47:12 -- common/autotest_common.sh@10 -- # set +x 00:10:24.111 ************************************ 00:10:24.111 START TEST nvme_scc 00:10:24.111 ************************************ 00:10:24.111 10:47:12 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:24.111 * Looking for test storage... 
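All three fio passes above run the same example_config.fio against a different PCIe address, with the block size chosen from the identify output (4096 whenever no 'Extended Data LBA' format is reported). The job file itself is never echoed into the log, so the following is only a hedged reconstruction from the run banner (randrw, ioengine=spdk, iodepth=128, one thread, identical 2001 msec runtimes pointing at a 2-second time_based run); the field names not shown in the banner are assumptions:

    cat > job.fio <<'EOF'
    [global]
    ioengine=spdk        # served by the LD_PRELOADed spdk_nvme plugin
    thread=1
    time_based=1
    runtime=2
    rw=randrw
    iodepth=128

    [test]
    numjobs=1
    EOF
    # Colons in the PCI address are replaced with dots, since fio treats
    # ':' in a filename as a separator between multiple files.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio job.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096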
00:10:24.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:24.111 10:47:12 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:24.111 10:47:12 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:24.111 10:47:12 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:24.111 10:47:12 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:24.111 10:47:12 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.111 10:47:12 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.111 10:47:12 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.111 10:47:12 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.111 10:47:12 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.111 10:47:12 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.111 10:47:12 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.111 10:47:12 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.111 10:47:12 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.111 10:47:12 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.111 10:47:12 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.111 10:47:12 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:24.111 10:47:12 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:24.111 10:47:12 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.111 10:47:12 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:24.111 10:47:12 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:24.111 10:47:13 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:24.111 10:47:13 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.111 10:47:13 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:24.111 10:47:13 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.111 10:47:13 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:24.111 10:47:13 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:24.111 10:47:13 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.111 10:47:13 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:24.111 10:47:13 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.111 10:47:13 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.111 10:47:13 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.111 10:47:13 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:24.111 10:47:13 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.111 10:47:13 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:24.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.111 --rc genhtml_branch_coverage=1 00:10:24.111 --rc genhtml_function_coverage=1 00:10:24.111 --rc genhtml_legend=1 00:10:24.111 --rc geninfo_all_blocks=1 00:10:24.111 --rc geninfo_unexecuted_blocks=1 00:10:24.111 00:10:24.111 ' 00:10:24.111 10:47:13 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:24.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.111 --rc genhtml_branch_coverage=1 00:10:24.111 --rc genhtml_function_coverage=1 00:10:24.111 --rc genhtml_legend=1 00:10:24.111 --rc geninfo_all_blocks=1 00:10:24.111 --rc geninfo_unexecuted_blocks=1 00:10:24.111 00:10:24.111 ' 00:10:24.111 10:47:13 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:24.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.111 --rc genhtml_branch_coverage=1 00:10:24.111 --rc genhtml_function_coverage=1 00:10:24.111 --rc genhtml_legend=1 00:10:24.111 --rc geninfo_all_blocks=1 00:10:24.111 --rc geninfo_unexecuted_blocks=1 00:10:24.111 00:10:24.111 ' 00:10:24.111 10:47:13 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:24.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.111 --rc genhtml_branch_coverage=1 00:10:24.111 --rc genhtml_function_coverage=1 00:10:24.111 --rc genhtml_legend=1 00:10:24.111 --rc geninfo_all_blocks=1 00:10:24.111 --rc geninfo_unexecuted_blocks=1 00:10:24.111 00:10:24.111 ' 00:10:24.111 10:47:13 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:24.111 10:47:13 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:24.111 10:47:13 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:24.111 10:47:13 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:24.111 10:47:13 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:24.111 10:47:13 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.111 10:47:13 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.111 10:47:13 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.111 10:47:13 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.111 10:47:13 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.111 10:47:13 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.111 10:47:13 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.111 10:47:13 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:24.111 10:47:13 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
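The lcov probe above routes through cmp_versions in scripts/common.sh: both version strings are split into fields on their separators and compared numerically field by field, so 'lt 1.15 2' is true because 1 < 2 already decides it in the first field. A simplified sketch of that compare (the real helper also splits on '-' and ':' and serves gt/eq through the same loop):

    lt() {
        local -a v1 v2
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            # Missing fields count as 0, so 1.15 vs 2 compares 1 < 2 first.
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo "old lcov: fall back to the plain --rc options"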
00:10:24.111 10:47:13 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:24.111 10:47:13 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:24.111 10:47:13 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:24.111 10:47:13 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:24.111 10:47:13 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:24.111 10:47:13 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:24.111 10:47:13 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:24.111 10:47:13 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:24.111 10:47:13 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:24.112 10:47:13 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.112 10:47:13 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:24.112 10:47:13 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:24.112 10:47:13 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:24.112 10:47:13 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:24.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:24.628 Waiting for block devices as requested 00:10:24.887 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.887 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.887 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:25.145 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:30.424 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:30.424 10:47:19 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:30.424 10:47:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:30.424 10:47:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:30.424 10:47:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:30.424 10:47:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:30.424 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.425 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:30.426 10:47:19 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:30.426 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:30.427 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:30.428 10:47:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.428 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:30.429 
10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
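Annotation: the steady @21/@22/@23 rhythm above is nvme_get's inner loop. Each line of nvme-cli output is split on the first ':' into a register name and a value, blank values are skipped, and the pair is eval'd into a global associative array (ng0n1 here). A sketch of the loop, inferred from the trace tags and not the verbatim SPDK source:

    nvme_get() {
        local ref=$1 reg val                     # @17: e.g. ref=ng0n1
        shift                                    # @18: remaining args go to nvme-cli
        local -gA "$ref=()"                      # @20: global associative array
        while IFS=: read -r reg val; do          # @21
            [[ -n $val ]] || continue            # @22: skip lines with no value
            # @23: e.g. eval 'ng0n1[nsze]="0x140000"'
            eval "${ref}[${reg// /}]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16
    }

Values that themselves contain colons (the lbaf and power-state rows) survive intact because read with two variables only splits at the first ':'.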
00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:30.429 10:47:19 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.429 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:30.430 10:47:19 nvme_scc 
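Annotation: ng0n1 is now fully parsed, and its geometry pins down the namespace size. nsze=0x140000 is the LBA count, flbas=0x4 selects LBA format 4, and lbaf4 above reads 'ms:0 lbads:12 rp:0 (in use)', i.e. 2^12 = 4096-byte blocks with no per-block metadata. Multiplying out, with the values taken straight from the trace:

    nsze=0x140000                            # ng0n1[nsze]
    lbads=12                                 # from ng0n1[lbaf4]
    echo $(( nsze * (1 << lbads) ))          # 5368709120 bytes
    echo $(( nsze * (1 << lbads) >> 30 ))    # 5, i.e. a 5 GiB namespace

The trace that continues below repeats the same id-ns fields for nvme0n1, the block-device view of the same namespace, so the values match ng0n1 exactly.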
-- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:30.430 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:30.431 10:47:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:30.431 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:30.432 10:47:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:30.432 10:47:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:30.432 10:47:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:30.432 10:47:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:30.432 10:47:19 
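Annotation: controller nvme0 is complete at this point. The @60..63 assignments above register it in the suite's global lookup tables, and the @47 loop then advances to /sys/class/nvme/nvme1, where pci_can_use (scripts/common.sh@18..27) returns 0, apparently because no PCI allow/block list is configured in this run, so 0000:00:10.0 is accepted and parsing starts over for nvme1. The shape of the tables, reconstructed from the trace:

    declare -gA ctrls nvmes bdfs
    declare -ga ordered_ctrls
    ctrls[nvme0]=nvme0               # @60: name of the controller's assoc array
    nvmes[nvme0]=nvme0_ns            # @61: name of its namespace table
    bdfs[nvme0]=0000:00:11.0         # @62: PCI address backing nvme0
    ordered_ctrls[0]=nvme0           # @63: ${ctrl_dev/nvme/} gives the index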
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.432 
10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:30.432 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:30.433 
10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:30.433 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:30.434 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.435 10:47:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.435 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:30.436 10:47:19 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
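The id-ns fields just captured are enough to work out the namespace geometry by hand: per the NVMe spec the low nibble of flbas (0x7 here) indexes the LBA format table, and the matching lbaf7 entry further down carries lbads:12, i.e. 4096-byte blocks. A quick back-of-the-envelope check in shell arithmetic, using the traced values:

nsze=0x17a17a lbads=12
printf 'blocks=%d block=%dB total=%dB\n' \
  $((nsze)) $((1 << lbads)) $((nsze * (1 << lbads)))
# -> blocks=1548666 block=4096B total=6343335936B (~6.3 GB)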
00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.436 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:30.437 10:47:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:30.437 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 
10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
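ng1n1 and nvme1n1 are the same namespace seen twice, once as the generic char device and once as the block device; both get parsed because of the extglob pattern visible at functions.sh@54. A small sketch of what that glob expands to, assuming the sysfs layout traced here (extglob must be enabled):

shopt -s extglob
ctrl=/sys/class/nvme/nvme1
# "${ctrl##*nvme}" -> "1" and "${ctrl##*/}" -> "nvme1", so the pattern
# matches both ng1n1 and nvme1n1 under $ctrl:
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  echo "ns_dev=${ns##*/}"    # -> ns_dev=ng1n1, then ns_dev=nvme1n1
done

The index used at functions.sh@58, ${ns##*n}, strips everything up to the last "n", leaving the namespace id ("1"), which is why both device nodes land in the same _ctrl_ns slot.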
00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:30.438 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:30.439 
10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.439 10:47:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.439 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:30.440 10:47:19 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:30.440 10:47:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:30.440 10:47:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:30.440 10:47:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:30.440 10:47:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.440 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
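Fields like oaes=0x100 and ctratt=0x8000 here, or oncs=0x15d on the first controller, are bitmasks rather than scalars. A throwaway helper to enumerate the set bits; the ONCS names in the comment follow the usual NVMe base-spec decode and are worth double-checking against the spec revision in use:

bits() {
  local v=$(($1)) i
  for ((i = 0; v >> i; i++)); do
    (( (v >> i) & 1 )) && printf 'bit %d set\n' "$i"
  done
  return 0
}
bits 0x15d  # bits 0,2,3,4,6,8: Compare, DSM, Write Zeroes, Save/Select, Timestamp, Copy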
00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:30.441 10:47:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.441 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:30.442 10:47:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:30.442 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:30.443 
10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.443 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:30.444 
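For readers following the trace: every reg=val entry above is produced by the same loop — `nvme id-ctrl` output is read line by line with IFS=: and eval'd into a global associative array named after the controller. A minimal sketch of that pattern, assuming nvme-cli output of the form "field : value" (the function name nvme_get_sketch is illustrative, not the exact SPDK helper):

    #!/usr/bin/env bash
    # Parse `nvme id-ctrl <dev>` lines of the form "key : value" into a
    # global associative array named after the controller (e.g. nvme2).
    nvme_get_sketch() {
      local ref=$1 dev=$2 reg val
      declare -gA "$ref=()"                 # mirrors `local -gA 'nvme2=()'` in the trace
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # "ps    0" -> "ps0", as seen above
        val=${val# }                        # drop the single leading space
        [[ -n $reg && -n $val ]] || continue
        eval "$ref[$reg]=\$val"             # nvme2[oacs]=0x12a, nvme2[subnqn]=..., ...
      done < <(nvme id-ctrl "$dev")
    }
    nvme_get_sketch nvme2 /dev/nvme2        # requires nvme-cli and the device

Only the first colon splits the line, so values that themselves contain colons (the ps0 power-state string, the subnqn) land intact in val.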
00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@55-57 -- # /sys/class/nvme/nvme2/ng2n1 exists; nvme_get ng2n1 id-ns /dev/ng2n1
00:10:30.444 10:47:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1, fields read into the ng2n1 array:
00:10:30.444 nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:10:30.444 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:10:30.445 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:10:30.445 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:30.445 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:10:30.446 lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:30.446 10:47:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng2n1
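The per-namespace blocks are driven by an extglob pattern over the controller's sysfs directory, with a nameref collecting each namespace into nvme2_ns (functions.sh@53-58 in the trace). A sketch of that discovery loop under the same assumptions (collect_ns_sketch is an illustrative name; extglob must be enabled before the function is parsed):

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    collect_ns_sketch() {
      local ctrl=$1                          # e.g. /sys/class/nvme/nvme2
      local -n _ctrl_ns=${ctrl##*/}_ns       # nameref to nvme2_ns, as at @53
      local ns ns_dev
      # Match both char-dev (ng2nN) and block-dev (nvme2nN) entries, as at @54.
      for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue             # the @55 existence check
        ns_dev=${ns##*/}                     # ng2n1, ng2n2, ...
        _ctrl_ns[${ns_dev##*n}]=$ns_dev      # index by namespace number, as at @58
      done
    }
    declare -A nvme2_ns=()
    collect_ns_sketch /sys/class/nvme/nvme2

With ctrl=/sys/class/nvme/nvme2, `${ctrl##*nvme}` expands to 2 and `${ctrl##*/}` to nvme2, so the alternation matches ng2n* and nvme2n* exactly as the trace shows.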
00:10:30.446 10:47:19 nvme_scc -- nvme/functions.sh@55-57 -- # /sys/class/nvme/nvme2/ng2n2 exists; nvme_get ng2n2 id-ns /dev/ng2n2
00:10:30.446 10:47:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2: every field parses to the same value as ng2n1 above (nsze=0x100000 through lbaf7='ms:64 lbads:12 rp:0')
00:10:30.447 10:47:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[2]=ng2n2
00:10:30.447 10:47:19 nvme_scc -- nvme/functions.sh@55-57 -- # /sys/class/nvme/nvme2/ng2n3 exists; nvme_get ng2n3 id-ns /dev/ng2n3
00:10:30.447 10:47:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3, fields read into the ng2n3 array:
00:10:30.448 nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:10:30.448 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:10:30.448 noiob=0 nvmcap=0 npwg=0 npwa=0 ...
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.448 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.449 10:47:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:30.712 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:30.712 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.712 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.712 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.713 10:47:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:30.713 10:47:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:30.713 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:30.714 10:47:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.714 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.715 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:30.716 
10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:30.716 10:47:19 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.716 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:30.717 10:47:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:30.717 10:47:19 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:30.717 10:47:19 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:30.717 10:47:19 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:30.717 10:47:19 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:30.717 10:47:19 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:30.717 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:30.718 10:47:19 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:30.718 10:47:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.718 
10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:30.718 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:30.719 10:47:19 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 
10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:30.719 
10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.719 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.720 10:47:19 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:30.720 10:47:19 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:30.720 10:47:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:30.720 10:47:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
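The long trace above is the `nvme_get` helper in test/common/nvme/functions.sh doing controller discovery: for every /sys/class/nvme/nvme* device it runs nvme-cli's `id-ctrl` (and `id-ns` for each namespace), splits each output line on `:` into a register/value pair, and evals the pair into a per-device Bash associative array (`nvme2n3`, `nvme3`, ...), finally registering the device in `ctrls`, `nvmes`, `bdfs`, and `ordered_ctrls`. A minimal sketch of that parse loop, with an illustrative array name and simplified trimming (not the exact functions.sh code):

    declare -A ctrl_regs    # stand-in for the nvme3-style array in the trace
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                  # drop padding around the key
        val=${val#"${val%%[![:space:]]*}"}        # left-trim the value
        [[ -n $reg && -n $val ]] || continue      # skip blank/partial lines
        ctrl_regs[$reg]=$val                      # e.g. ctrl_regs[oncs]=0x15d
    done < <(nvme id-ctrl /dev/nvme3)

Keeping one associative array per device is what lets later helpers such as `get_nvme_ctrl_feature nvme3 oncs` resolve a register with a simple nameref lookup instead of re-running nvme-cli.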
00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:30.721 10:47:19 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:30.721 10:47:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:30.721 10:47:19 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:30.721 10:47:19 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:31.289 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:32.226 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.226 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.226 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.226 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.226 10:47:21 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:32.226 10:47:21 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:32.226 10:47:21 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.226 10:47:21 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:32.226 ************************************ 00:10:32.226 START TEST nvme_simple_copy 00:10:32.226 ************************************ 00:10:32.226 10:47:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:32.486 Initializing NVMe Controllers 00:10:32.486 Attaching to 0000:00:10.0 00:10:32.486 Controller supports SCC. Attached to 0000:00:10.0 00:10:32.486 Namespace ID: 1 size: 6GB 00:10:32.486 Initialization complete. 
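Before handing a controller to the test binary, `get_ctrls_with_feature scc` (traced above) filters on bit 8 of the ONCS register captured during discovery: every QEMU controller here reports `oncs=0x15d`, whose bit 8 (the Copy command) is set, so all four controllers qualify and `nvme1` at 0000:00:10.0 is picked first; the "Controller supports SCC" line is the test binary confirming the same capability after attaching. A compact restatement of that bit test, using the value from the trace and an illustrative function name:

    has_simple_copy() {
        local oncs=$1
        (( oncs & 1 << 8 ))       # ONCS bit 8 advertises the Copy command
    }
    has_simple_copy 0x15d && echo "SCC supported"   # 0x15d has 0x100 set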
00:10:32.486 00:10:32.486 Controller QEMU NVMe Ctrl (12340 ) 00:10:32.486 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:32.486 Namespace Block Size:4096 00:10:32.486 Writing LBAs 0 to 63 with Random Data 00:10:32.486 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:32.486 LBAs matching Written Data: 64 00:10:32.486 00:10:32.486 real 0m0.296s 00:10:32.486 user 0m0.101s 00:10:32.486 sys 0m0.094s 00:10:32.486 ************************************ 00:10:32.486 END TEST nvme_simple_copy 00:10:32.486 ************************************ 00:10:32.486 10:47:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.486 10:47:21 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:32.745 ************************************ 00:10:32.745 END TEST nvme_scc 00:10:32.745 ************************************ 00:10:32.745 00:10:32.745 real 0m8.944s 00:10:32.745 user 0m1.506s 00:10:32.745 sys 0m2.448s 00:10:32.745 10:47:21 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.745 10:47:21 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:32.745 10:47:21 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:32.745 10:47:21 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:32.745 10:47:21 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:32.745 10:47:21 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:32.745 10:47:21 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:32.745 10:47:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:32.745 10:47:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.745 10:47:21 -- common/autotest_common.sh@10 -- # set +x 00:10:32.745 ************************************ 00:10:32.745 START TEST nvme_fdp 00:10:32.745 ************************************ 00:10:32.745 10:47:21 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:10:32.745 * Looking for test storage... 00:10:32.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:32.745 10:47:21 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:32.745 10:47:21 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:32.745 10:47:21 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:33.005 10:47:22 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:33.005 10:47:22 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.005 10:47:22 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:33.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.005 --rc genhtml_branch_coverage=1 00:10:33.005 --rc genhtml_function_coverage=1 00:10:33.005 --rc genhtml_legend=1 00:10:33.005 --rc geninfo_all_blocks=1 00:10:33.005 --rc geninfo_unexecuted_blocks=1 00:10:33.005 00:10:33.005 ' 00:10:33.005 10:47:22 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:33.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.005 --rc genhtml_branch_coverage=1 00:10:33.005 --rc genhtml_function_coverage=1 00:10:33.005 --rc genhtml_legend=1 00:10:33.005 --rc geninfo_all_blocks=1 00:10:33.005 --rc geninfo_unexecuted_blocks=1 00:10:33.005 00:10:33.005 ' 00:10:33.005 10:47:22 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:33.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.005 --rc genhtml_branch_coverage=1 00:10:33.005 --rc genhtml_function_coverage=1 00:10:33.005 --rc genhtml_legend=1 00:10:33.005 --rc geninfo_all_blocks=1 00:10:33.005 --rc geninfo_unexecuted_blocks=1 00:10:33.005 00:10:33.005 ' 00:10:33.005 10:47:22 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:33.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.005 --rc genhtml_branch_coverage=1 00:10:33.005 --rc genhtml_function_coverage=1 00:10:33.005 --rc genhtml_legend=1 00:10:33.005 --rc geninfo_all_blocks=1 00:10:33.005 --rc geninfo_unexecuted_blocks=1 00:10:33.005 00:10:33.005 ' 00:10:33.005 10:47:22 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:33.005 10:47:22 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:33.005 10:47:22 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:33.005 10:47:22 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:33.005 10:47:22 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
00:10:33.005 10:47:22 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:33.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:33.005 --rc genhtml_branch_coverage=1
00:10:33.005 --rc genhtml_function_coverage=1
00:10:33.005 --rc genhtml_legend=1
00:10:33.005 --rc geninfo_all_blocks=1
00:10:33.005 --rc geninfo_unexecuted_blocks=1
00:10:33.005 
00:10:33.005 '
00:10:33.005 10:47:22 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:33.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:33.005 --rc genhtml_branch_coverage=1
00:10:33.005 --rc genhtml_function_coverage=1
00:10:33.005 --rc genhtml_legend=1
00:10:33.005 --rc geninfo_all_blocks=1
00:10:33.005 --rc geninfo_unexecuted_blocks=1
00:10:33.005 
00:10:33.005 '
00:10:33.005 10:47:22 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:10:33.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:33.005 --rc genhtml_branch_coverage=1
00:10:33.005 --rc genhtml_function_coverage=1
00:10:33.005 --rc genhtml_legend=1
00:10:33.005 --rc geninfo_all_blocks=1
00:10:33.005 --rc geninfo_unexecuted_blocks=1
00:10:33.005 
00:10:33.005 '
00:10:33.005 10:47:22 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:10:33.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:33.005 --rc genhtml_branch_coverage=1
00:10:33.005 --rc genhtml_function_coverage=1
00:10:33.005 --rc genhtml_legend=1
00:10:33.005 --rc geninfo_all_blocks=1
00:10:33.005 --rc geninfo_unexecuted_blocks=1
00:10:33.005 
00:10:33.005 '
00:10:33.005 10:47:22 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:10:33.005 10:47:22 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:10:33.005 10:47:22 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:10:33.005 10:47:22 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:10:33.005 10:47:22 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:33.005 10:47:22 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:33.005 10:47:22 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:33.005 10:47:22 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:33.006 10:47:22 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:33.006 10:47:22 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:10:33.006 10:47:22 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:33.006 10:47:22 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:10:33.006 10:47:22 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:10:33.006 10:47:22 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:10:33.006 10:47:22 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:10:33.006 10:47:22 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:10:33.006 10:47:22 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:10:33.006 10:47:22 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:10:33.006 10:47:22 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:10:33.006 10:47:22 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:10:33.006 10:47:22 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:33.006 10:47:22 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:10:33.574 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:33.833 Waiting for block devices as requested
00:10:33.833 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:10:33.833 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:10:34.093 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:10:34.093 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:10:39.372 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:10:39.372 10:47:28 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:10:39.372 10:47:28 nvme_fdp
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:39.372 10:47:28 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:39.372 10:47:28 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:39.372 10:47:28 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:39.372 10:47:28 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.372 10:47:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.372 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:39.373 10:47:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:39.373 10:47:28 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.373 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:39.374 10:47:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 
10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:39.374 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:39.375 10:47:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:39.375 10:47:28 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:39.375 10:47:28 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.375 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
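Every register line in this dump comes out of the same loop: nvme_get in nvme/functions.sh runs the nvme-cli binary named at the top of the scan, splits each "name : value" row on the colon, and evals the pair into a global associative array (ng0n1 here, nvme0 earlier). A trimmed sketch of that parse, assuming only what the trace shows (nvme_get_sketch is a hypothetical stand-in; the real function handles a few more edge cases):

    nvme_get_sketch() {   # nvme_get_sketch <array-name> <id-cmd> <device>
        local ref=$1 reg val
        shift
        local -gA "$ref=()"               # e.g. declare -gA 'ng0n1=()'
        while IFS=: read -r reg val; do
            [[ -n $reg && -n $val ]] || continue   # keep only name:value rows
            reg=${reg//[[:space:]]/}      # strip padding: "nsze " -> "nsze"
            val=${val# }                  # drop the space after ':'
            eval "${ref}[$reg]=\"\$val\"" # e.g. ng0n1[nsze]="0x140000"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    nvme_get_sketch ng0n1 id-ns /dev/ng0n1
    echo "${ng0n1[nsze]}"   # -> 0x140000 on this QEMU namespace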
00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:39.376 10:47:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
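All eight LBA formats for ng0n1 are in; the next trace line records the node in _ctrl_ns and the functions.sh@54 loop comes around for the sibling block device. The glob driving that loop is the extglob alternation visible in the trace: with ctrl=/sys/class/nvme/nvme0, ${ctrl##*nvme} expands to "0" and ${ctrl##*/} to "nvme0", so the pattern matches both the character node ng0n1 above and the nvme0n1 scanned next. Spelled out on its own (the echo body is illustrative):

    shopt -s extglob   # enables the @(...|...) alternation
    ctrl=/sys/class/nvme/nvme0
    # The glob expands to /sys/class/nvme/nvme0/@(ng0|nvme0n)*
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node: ${ns##*/}"   # -> ng0n1, then nvme0n1
    done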
00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.376 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:39.377 10:47:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:39.377 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:39.378 10:47:28 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:39.378 10:47:28 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:39.378 10:47:28 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:39.378 10:47:28 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:39.378 10:47:28 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:39.378 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:39.379 10:47:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
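The id-ctrl fields being captured here (oacs, frmw, lpa, and so on) are bitmask registers, so once nvme1[] is filled they can be tested with shell arithmetic. A hedged sketch using values copied from the trace: supports_ns_mgmt is our name, not from functions.sh; the bit position follows the NVMe spec (OACS bit 3 = namespace management), and the 4 KiB minimum page size used for the mdts calculation is an assumption the log does not confirm.

    # Illustrative values copied from the trace; not read from a device.
    declare -A nvme1=( [oacs]=0x12a [mdts]=7 )
    supports_ns_mgmt() { (( ${nvme1[oacs]} & (1 << 3) )); }   # OACS bit 3
    supports_ns_mgmt && echo "nvme1: namespace management supported"
    # MDTS is a power-of-two multiple of the min page size (assumed 4 KiB):
    echo "max transfer: $(( 4096 << ${nvme1[mdts]} )) bytes"  # -> 524288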
00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:39.379 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
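The namespace loop driving all of this (functions.sh@54 in the trace) relies on bash extglob: for a controller node such as /sys/class/nvme/nvme1 it matches both the block namespaces (nvme1n*) and the char namespaces (ng1n*) with one pattern. A standalone sketch of that glob, assuming a real /sys/class/nvme tree; with nullglob the loop is simply empty on machines without one.

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    # "${ctrl##*nvme}" -> "1" and "${ctrl##*/}" -> "nvme1", so the pattern
    # expands to @(ng1|nvme1n)* and matches ng1n1, nvme1n1, ...
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      echo "namespace node: ${ns##*/}"
    done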
00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.380 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.381 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:39.382 10:47:28 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
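Around this point the trace also shows the bookkeeping that ties the arrays together: `local -n _ctrl_ns=nvme1_ns` makes _ctrl_ns a nameref, so the same loop body fills nvme0_ns, nvme1_ns, ... depending on the controller, while ctrls/nvmes/bdfs map each controller to its namespace-array name and PCI address. A reduced sketch; the wrapper function is ours, and 0000:00:10.0 is the address traced for nvme1.

    declare -A nvme1_ns=() ctrls=() nvmes=() bdfs=()
    fill_ns_demo() {
      local -n _ctrl_ns=nvme1_ns        # nameref: writes land in nvme1_ns
      local ns=ng1n1
      _ctrl_ns[${ns##*n}]=$ns           # "${ns##*n}" -> "1" (namespace id)
    }
    fill_ns_demo
    ctrls[nvme1]=nvme1
    nvmes[nvme1]=nvme1_ns               # store the *name* of the ns array
    bdfs[nvme1]=0000:00:10.0            # PCI address from the trace
    echo "nvme1 ns 1 -> ${nvme1_ns[1]}" # -> ng1n1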
00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:39.382 10:47:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
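What is being traced here is nvme_get() in nvme/functions.sh. A minimal sketch of the loop the xtrace implies, reconstructed from the script line numbers visible in the trace (@16-@23); the exact key/value trimming in the real script is not shown by the trace, so those two substitutions are assumptions:

    # Reconstruction from the trace, not the verbatim nvme/functions.sh source.
    nvme_get() {                                  # e.g. nvme_get ng1n1 id-ns /dev/ng1n1
        local ref=$1 reg val                      # functions.sh@17
        shift                                     # functions.sh@18
        local -gA "$ref=()"                       # functions.sh@20: global assoc array
        while IFS=: read -r reg val; do           # functions.sh@21: split at the first ':'
            [[ -n $val ]] || continue             # functions.sh@22: skip header/blank lines
            # functions.sh@23 -- key/value cleanup assumed:
            eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # functions.sh@16
    }

Called as in the trace (nvme_get ng1n1 id-ns /dev/ng1n1), this leaves ng1n1[flbas]=0x7, ng1n1[dpc]=0x1f and so on -- exactly the eval assignments echoed above.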
00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.382 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:39.383 10:47:28 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:39.383 10:47:28 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:39.383 10:47:28 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.383 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:39.384 10:47:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
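The lbafN strings captured for these namespaces describe each supported LBA format: ms is the metadata bytes carried per block, lbads the log2 of the data block size, and rp a relative-performance hint; flbas selects the format in use, which is why lbaf7 carries the "(in use)" marker. Decoding this namespace's numbers:

    flbas=0x7                      # bits 3:0 select LBA format 7
    # lbaf7 = "ms:64 lbads:12" -> 4096-byte data blocks + 64 B of metadata each
    nsze=0x17a17a                  # namespace size, in logical blocks
    echo $(( nsze * (1 << 12) ))   # 1548666 * 4096 = 6343335936 bytes (~5.9 GiB)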
00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:39.384 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:39.651 10:47:28 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:39.651 10:47:28 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:39.651 10:47:28 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:39.651 10:47:28 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.651 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
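Stepping back, the surrounding trace (functions.sh@47-@63, plus the pci_can_use check in scripts/common.sh) is the controller discovery loop: every /sys/class/nvme/nvme* entry is vetted against the PCI allow/block lists, id-ctrl and per-namespace id-ns output is captured through nvme_get, and a set of bookkeeping maps is filled in. A sketch of that outer loop as the trace suggests it -- the function name, the PCI-address lookup, and the PCI_ALLOWED/PCI_BLOCKED variable names are assumptions:

    pci_can_use() {                          # simplified from scripts/common.sh@18-27
        [[ $PCI_BLOCKED =~ $1 ]] && return 1 # trace@21: blocklist regex (empty here)
        [[ -z $PCI_ALLOWED ]] && return 0    # trace@25/@27: empty allowlist passes all
        [[ $PCI_ALLOWED =~ $1 ]]
    }

    scan_nvme_ctrls() {                      # name assumed; needs shopt -s extglob
        local ctrl ns pci ctrl_dev ns_dev
        for ctrl in /sys/class/nvme/nvme*; do              # functions.sh@47
            [[ -e $ctrl ]] || continue                     # functions.sh@48
            pci=$(< "$ctrl/address")                       # @49: source assumed
            pci_can_use "$pci" || continue                 # functions.sh@50
            ctrl_dev=${ctrl##*/}                           # functions.sh@51, e.g. nvme2
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"  # functions.sh@52
            local -gA "${ctrl_dev}_ns=()"                  # assumed; @53 binds to it
            local -n _ctrl_ns=${ctrl_dev}_ns               # functions.sh@53
            # extglob: matches both ng2n* (char dev) and nvme2n* (block dev)
            for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # @54
                [[ -e $ns ]] || continue                   # functions.sh@55
                ns_dev=${ns##*/}                           # functions.sh@56
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"    # functions.sh@57
                _ctrl_ns[${ns##*n}]=$ns_dev                # @58: keyed by ns number
            done
            ctrls["$ctrl_dev"]=$ctrl_dev                   # functions.sh@60
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns              # functions.sh@61
            bdfs["$ctrl_dev"]=$pci                         # functions.sh@62
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev     # functions.sh@63
        done
    }

ctrls, nvmes, bdfs and ordered_ctrls are the globals later consumers index by controller name; nvmes maps each controller to the *name* of its namespace array (nvme2_ns), which is why @53 re-binds the _ctrl_ns nameref on every iteration.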
00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:39.652 10:47:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:39.652 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
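The identify fields already captured pin down what nvme2 is: vid 0x1b36 is the PCI vendor ID Red Hat reserves for QEMU's emulated devices and ssvid 0x1af4 the Red Hat/virtio subsystem ID, which together with mn "QEMU NVMe Ctrl", fr 8.0.0 and sn 12342 mark this as one of the QEMU-emulated controllers the test VM was provisioned with -- the same device the subsystem NQN nqn.2019-08.org.qemu:12342 names a little further down.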
00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:39.653 10:47:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:39.653 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
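One quirk is worth decoding in the power-state entries stored just below: nvme-cli prints the id-ctrl power state descriptor across two lines ("ps 0 : mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0" followed by an indented "rwt:0 rwl:0 idle_power:- active_power:-" continuation). Because the parser splits each line only at its first colon and strips whitespace from the key, the pair lands in the array as nvme2[ps0] plus a stray nvme2[rwt] rather than as one field.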
00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.654 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 
10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:39.655 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.656 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:39.657 10:47:28 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 
10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:39.657 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:39.658 
10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
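(For reference, the loop at functions.sh@54 driving these per-namespace dumps iterates an extglob pattern that matches both the character-device entries, ng2n1, and the block-device entries, nvme2n1, under the controller's sysfs directory, then keys _ctrl_ns by the namespace index extracted with ${ns##*n}. A minimal sketch of that expansion, assuming extglob and the sysfs layout shown in this trace; the echo is illustrative only.)

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    # "ng${ctrl##*nvme}" expands to ng2 (char-dev namespaces) and
    # "${ctrl##*/}n" to nvme2n (block-dev namespaces); @(..|..)* matches both.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        # ${ns##*n} strips everything up to the last 'n', leaving the index,
        # so ng2n1 and nvme2n1 both map to index 1 (the later entry wins).
        echo "index ${ns##*n} <- ${ns##*/}"
    done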
00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:39.658 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:39.658 10:47:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:39.659 10:47:28 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:39.659 10:47:28 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:39.659 
10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:39.659 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:39.660 10:47:28 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:39.660 
10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:39.660 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:39.661 10:47:28 nvme_fdp 
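[annotation] The trace above is nvme/functions.sh's nvme_get helper filling the global associative array nvme2n1 from `nvme id-ns /dev/nvme2n1`: each `reg : val` line of the nvme-cli output becomes one eval'd array assignment (functions.sh@21-23). A minimal standalone sketch of that mechanism, assuming a hypothetical helper name parse_id_ns and a plain `nvme` on PATH (the job itself pins /usr/local/src/nvme-cli/nvme and uses `local -gA`):

    parse_id_ns() {                      # hypothetical re-creation of nvme_get's loop
      local ref=$1 dev=$2 reg val
      declare -gA "$ref=()"              # global associative array, cf. functions.sh@20
      while IFS=: read -r reg val; do    # split on the first ':', cf. functions.sh@21
        reg=${reg//[[:space:]]/}         # register names arrive padded with spaces
        val=${val# }                     # drop the single space after the ':'
        [[ -n $reg && -n $val ]] || continue   # cf. the functions.sh@22 guard
        eval "${ref}[$reg]=\"$val\""     # cf. the functions.sh@23 eval-assignment
      done < <(nvme id-ns "$dev")
    }
    # e.g.: parse_id_ns nvme2n1 /dev/nvme2n1; echo "${nvme2n1[nsze]}"   # -> 0x100000

Multi-word values such as the lbaf descriptors ('ms:0 lbads:12 rp:0 (in use)') survive because read assigns everything after the first ':' to val, which is why the trace shows them stored verbatim, trailing spaces included.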
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.661 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:39.662 10:47:28 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.662 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:39.663 10:47:28 
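[annotation] Between namespaces the trace shows the functions.sh@54-56 discovery loop: an extglob pattern over the controller's sysfs directory that matches both the block-device entries (nvme2n1, nvme2n2, ...) and the generic char-device entries (ng2n1, ...), so every namespace is found regardless of which node type the kernel exposes. A rough standalone illustration with the controller path hard-coded (nullglob is an added assumption so a non-matching pattern expands to nothing):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the pattern
      # matches ng2* (generic nodes) and nvme2n* (block nodes)
      [[ -e $ns ]] || continue           # cf. the functions.sh@55 existence check
      ns_dev=${ns##*/}                   # e.g. nvme2n2
      echo "found namespace node: $ns_dev"
    done

The trace's `_ctrl_ns[${ns##*n}]=nvme2n2` then keys the per-controller map by namespace index: `${ns##*n}` strips everything through the last 'n', leaving "2".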
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:39.663 10:47:28 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.663 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:39.664 10:47:28 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:39.664 10:47:28 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:39.664 10:47:28 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:39.664 10:47:28 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:39.664 10:47:28 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.664 10:47:28 nvme_fdp -- 
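[annotation] With nvme2's three namespaces recorded, functions.sh@60-63 files the controller into the ctrls/nvmes/bdfs/ordered_ctrls maps and the outer @47 loop advances to /sys/class/nvme/nvme3, gated by scripts/common.sh's pci_can_use on its PCI address (0000:00:13.0). In the trace the @21 regex test has an empty left-hand side and the @25 `[[ -z '' ]]` succeeds, i.e. neither a block-list nor an allow-list is set, so @27 returns 0 and the controller is parsed. A hedged reconstruction of that gate, inferred from the traced control flow only (the PCI_BLOCKED/PCI_ALLOWED names and the allow-list loop are assumptions, not the verbatim source):

    pci_can_use() {
      local i                              # cf. scripts/common.sh@18
      [[ $PCI_BLOCKED =~ $1 ]] && return 1 # @21: empty in this run, so nothing is blocked
      [[ -z $PCI_ALLOWED ]] && return 0    # @25: no allow-list either -> accept (@27)
      for i in $PCI_ALLOWED; do            # otherwise require an explicit allow-list hit
        [[ $i == "$1" ]] && return 0
      done
      return 1
    }

From here the remainder of the trace is the same nvme_get eval loop as before, now over `nvme id-ctrl /dev/nvme3` output, populating controller-level fields (vid, ssvid, sn, mn, mdts, oacs, ...) into the nvme3 array.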
nvme/functions.sh@21 -- # read -r reg val 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.664 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:39.925 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.926 10:47:28 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 
10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:39.926 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:39.927 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:39.928 10:47:28 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:39.928 10:47:28 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:39.929 10:47:28 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:39.929 10:47:28 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:39.929 10:47:28 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:39.929 10:47:28 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:40.497 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:41.433 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.433 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.433 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.433 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.433 10:47:30 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:41.433 10:47:30 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:41.433 10:47:30 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.433 10:47:30 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:41.433 ************************************ 00:10:41.433 START TEST nvme_flexible_data_placement 00:10:41.433 ************************************ 00:10:41.433 10:47:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:41.694 Initializing NVMe Controllers 00:10:41.694 Attaching to 0000:00:13.0 00:10:41.694 Controller supports FDP Attached to 0000:00:13.0 00:10:41.694 Namespace ID: 1 Endurance Group ID: 1 00:10:41.694 Initialization complete. 
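The long read/eval trace above is functions.sh splitting "register: value" pairs from the controller's identify output into a bash associative array (nvme3[reg]=value); the controller scan that follows reduces to a single bitmask test on that array. ctrl_has_fdp reads each controller's CTRATT field and checks bit 19, the Flexible Data Placement attribute: only nvme3 reports 0x88010, and 0x88010 & 0x80000 is non-zero, so nvme3 at 0000:00:13.0 is selected while the three controllers reporting 0x8000 are skipped. A minimal standalone sketch of that check, with the array literal standing in for the parsed identify data:

    declare -A nvme3=( [ctratt]=0x88010 )   # value taken from the identify trace above
    ctrl_has_fdp() {                        # mirrors nvme/functions.sh@176-180
        local -n _ctrl=$1
        local ctratt=${_ctrl[ctratt]}
        (( ctratt & 1 << 19 ))              # CTRATT bit 19: FDP supported
    }
    ctrl_has_fdp nvme3 && echo "nvme3 supports FDP"

With exactly one FDP-capable controller among the four, get_ctrl_with_feature fdp echoes nvme3 and the fdp test binary is pointed at its PCI address.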
00:10:41.694 00:10:41.694 ================================== 00:10:41.694 == FDP tests for Namespace: #01 == 00:10:41.694 ================================== 00:10:41.694 00:10:41.694 Get Feature: FDP: 00:10:41.694 ================= 00:10:41.694 Enabled: Yes 00:10:41.694 FDP configuration Index: 0 00:10:41.694 00:10:41.694 FDP configurations log page 00:10:41.694 =========================== 00:10:41.694 Number of FDP configurations: 1 00:10:41.694 Version: 0 00:10:41.694 Size: 112 00:10:41.694 FDP Configuration Descriptor: 0 00:10:41.694 Descriptor Size: 96 00:10:41.694 Reclaim Group Identifier format: 2 00:10:41.694 FDP Volatile Write Cache: Not Present 00:10:41.694 FDP Configuration: Valid 00:10:41.694 Vendor Specific Size: 0 00:10:41.694 Number of Reclaim Groups: 2 00:10:41.694 Number of Reclaim Unit Handles: 8 00:10:41.694 Max Placement Identifiers: 128 00:10:41.694 Number of Namespaces Supported: 256 00:10:41.694 Reclaim Unit Nominal Size: 6000000 bytes 00:10:41.694 Estimated Reclaim Unit Time Limit: Not Reported 00:10:41.694 RUH Desc #000: RUH Type: Initially Isolated 00:10:41.694 RUH Desc #001: RUH Type: Initially Isolated 00:10:41.694 RUH Desc #002: RUH Type: Initially Isolated 00:10:41.694 RUH Desc #003: RUH Type: Initially Isolated 00:10:41.694 RUH Desc #004: RUH Type: Initially Isolated 00:10:41.694 RUH Desc #005: RUH Type: Initially Isolated 00:10:41.694 RUH Desc #006: RUH Type: Initially Isolated 00:10:41.694 RUH Desc #007: RUH Type: Initially Isolated 00:10:41.694 00:10:41.694 FDP reclaim unit handle usage log page 00:10:41.694 ====================================== 00:10:41.694 Number of Reclaim Unit Handles: 8 00:10:41.694 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:41.694 RUH Usage Desc #001: RUH Attributes: Unused 00:10:41.694 RUH Usage Desc #002: RUH Attributes: Unused 00:10:41.694 RUH Usage Desc #003: RUH Attributes: Unused 00:10:41.694 RUH Usage Desc #004: RUH Attributes: Unused 00:10:41.694 RUH Usage Desc #005: RUH Attributes: Unused 00:10:41.694 RUH Usage Desc #006: RUH Attributes: Unused 00:10:41.694 RUH Usage Desc #007: RUH Attributes: Unused 00:10:41.694 00:10:41.694 FDP statistics log page 00:10:41.694 ======================= 00:10:41.694 Host bytes with metadata written: 1021874176 00:10:41.694 Media bytes with metadata written: 1021988864 00:10:41.694 Media bytes erased: 0 00:10:41.694 00:10:41.694 FDP Reclaim unit handle status 00:10:41.694 ============================== 00:10:41.694 Number of RUHS descriptors: 2 00:10:41.694 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005177 00:10:41.694 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:41.694 00:10:41.694 FDP write on placement id: 0 success 00:10:41.694 00:10:41.694 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:41.694 00:10:41.694 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:41.694 00:10:41.694 Get Feature: FDP Events for Placement handle: #0 00:10:41.694 ======================== 00:10:41.694 Number of FDP Events: 6 00:10:41.694 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:41.694 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:41.694 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:41.694 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:41.694 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:41.694 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:41.694 00:10:41.694 FDP events log
page 00:10:41.694 =================== 00:10:41.694 Number of FDP events: 1 00:10:41.694 FDP Event #0: 00:10:41.694 Event Type: RU Not Written to Capacity 00:10:41.694 Placement Identifier: Valid 00:10:41.694 NSID: Valid 00:10:41.694 Location: Valid 00:10:41.694 Placement Identifier: 0 00:10:41.694 Event Timestamp: 7 00:10:41.694 Namespace Identifier: 1 00:10:41.694 Reclaim Group Identifier: 0 00:10:41.694 Reclaim Unit Handle Identifier: 0 00:10:41.694 00:10:41.694 FDP test passed 00:10:41.694 ************************************ 00:10:41.694 END TEST nvme_flexible_data_placement 00:10:41.694 ************************************ 00:10:41.694 00:10:41.694 real 0m0.293s 00:10:41.694 user 0m0.092s 00:10:41.694 sys 0m0.098s 00:10:41.694 10:47:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.694 10:47:30 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:41.954 ************************************ 00:10:41.954 END TEST nvme_fdp 00:10:41.954 ************************************ 00:10:41.954 00:10:41.954 real 0m9.146s 00:10:41.954 user 0m1.628s 00:10:41.954 sys 0m2.569s 00:10:41.954 10:47:30 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.954 10:47:30 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:41.954 10:47:31 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:41.954 10:47:31 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:41.954 10:47:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:41.954 10:47:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.954 10:47:31 -- common/autotest_common.sh@10 -- # set +x 00:10:41.954 ************************************ 00:10:41.954 START TEST nvme_rpc 00:10:41.954 ************************************ 00:10:41.954 10:47:31 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:41.954 * Looking for test storage... 
00:10:41.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:41.954 10:47:31 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:41.954 10:47:31 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:41.954 10:47:31 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:42.214 10:47:31 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:42.214 10:47:31 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.215 10:47:31 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.215 10:47:31 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.215 10:47:31 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:42.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.215 --rc genhtml_branch_coverage=1 00:10:42.215 --rc genhtml_function_coverage=1 00:10:42.215 --rc genhtml_legend=1 00:10:42.215 --rc geninfo_all_blocks=1 00:10:42.215 --rc geninfo_unexecuted_blocks=1 00:10:42.215 00:10:42.215 ' 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:42.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.215 --rc genhtml_branch_coverage=1 00:10:42.215 --rc genhtml_function_coverage=1 00:10:42.215 --rc genhtml_legend=1 00:10:42.215 --rc geninfo_all_blocks=1 00:10:42.215 --rc geninfo_unexecuted_blocks=1 00:10:42.215 00:10:42.215 ' 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:42.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.215 --rc genhtml_branch_coverage=1 00:10:42.215 --rc genhtml_function_coverage=1 00:10:42.215 --rc genhtml_legend=1 00:10:42.215 --rc geninfo_all_blocks=1 00:10:42.215 --rc geninfo_unexecuted_blocks=1 00:10:42.215 00:10:42.215 ' 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:42.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.215 --rc genhtml_branch_coverage=1 00:10:42.215 --rc genhtml_function_coverage=1 00:10:42.215 --rc genhtml_legend=1 00:10:42.215 --rc geninfo_all_blocks=1 00:10:42.215 --rc geninfo_unexecuted_blocks=1 00:10:42.215 00:10:42.215 ' 00:10:42.215 10:47:31 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:42.215 10:47:31 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:42.215 10:47:31 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:42.215 10:47:31 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66936 00:10:42.215 10:47:31 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:42.215 10:47:31 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:42.215 10:47:31 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66936 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 66936 ']' 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.215 10:47:31 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.475 [2024-11-20 10:47:31.485285] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
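Before any RPC traffic, nvme_rpc.sh picks its target with get_first_nvme_bdf: gen_nvme.sh prints a JSON bdev config for every NVMe controller it finds, jq extracts the PCI addresses, and the first one wins (here 0000:00:10.0 out of the four QEMU controllers). A minimal sketch of that selection, assuming the same repo paths used throughout this log:

    # enumerate NVMe PCI addresses, then take the first (mirrors get_first_nvme_bdf)
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    bdf=${bdfs[0]}                          # e.g. 0000:00:10.0

The test below then attaches that controller as bdev Nvme0, deliberately feeds bdev_nvme_apply_firmware a file that does not exist to exercise the error path (the -32603 "open file failed." response is the expected outcome), and detaches. Condensed to the three rpc.py calls that appear in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1 || echo "failed as expected"
    $rpc bdev_nvme_detach_controller Nvme0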
00:10:42.475 [2024-11-20 10:47:31.485649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66936 ] 00:10:42.475 [2024-11-20 10:47:31.669872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:42.735 [2024-11-20 10:47:31.784264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.735 [2024-11-20 10:47:31.784301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.672 10:47:32 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.672 10:47:32 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:43.672 10:47:32 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:43.672 Nvme0n1 00:10:43.672 10:47:32 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:43.672 10:47:32 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:43.931 request: 00:10:43.931 { 00:10:43.931 "bdev_name": "Nvme0n1", 00:10:43.931 "filename": "non_existing_file", 00:10:43.931 "method": "bdev_nvme_apply_firmware", 00:10:43.931 "req_id": 1 00:10:43.931 } 00:10:43.931 Got JSON-RPC error response 00:10:43.931 response: 00:10:43.931 { 00:10:43.931 "code": -32603, 00:10:43.931 "message": "open file failed." 00:10:43.931 } 00:10:43.931 10:47:33 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:43.931 10:47:33 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:43.931 10:47:33 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:44.190 10:47:33 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:44.190 10:47:33 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66936 00:10:44.190 10:47:33 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 66936 ']' 00:10:44.190 10:47:33 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 66936 00:10:44.190 10:47:33 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:44.190 10:47:33 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.190 10:47:33 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66936 00:10:44.190 killing process with pid 66936 00:10:44.190 10:47:33 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.190 10:47:33 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.190 10:47:33 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66936' 00:10:44.190 10:47:33 nvme_rpc -- common/autotest_common.sh@973 -- # kill 66936 00:10:44.190 10:47:33 nvme_rpc -- common/autotest_common.sh@978 -- # wait 66936 00:10:46.727 ************************************ 00:10:46.727 END TEST nvme_rpc 00:10:46.727 ************************************ 00:10:46.727 00:10:46.727 real 0m4.537s 00:10:46.727 user 0m8.224s 00:10:46.727 sys 0m0.778s 00:10:46.728 10:47:35 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.728 10:47:35 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.728 10:47:35 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:46.728 10:47:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:46.728 10:47:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.728 10:47:35 -- common/autotest_common.sh@10 -- # set +x 00:10:46.728 ************************************ 00:10:46.728 START TEST nvme_rpc_timeouts 00:10:46.728 ************************************ 00:10:46.728 10:47:35 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:46.728 * Looking for test storage... 00:10:46.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:46.728 10:47:35 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:46.728 10:47:35 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:10:46.728 10:47:35 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:46.728 10:47:35 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.728 10:47:35 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:46.728 10:47:35 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.728 10:47:35 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:46.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.728 --rc genhtml_branch_coverage=1 00:10:46.728 --rc genhtml_function_coverage=1 00:10:46.728 --rc genhtml_legend=1 00:10:46.728 --rc geninfo_all_blocks=1 00:10:46.728 --rc geninfo_unexecuted_blocks=1 00:10:46.728 00:10:46.728 ' 00:10:46.728 10:47:35 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:46.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.728 --rc genhtml_branch_coverage=1 00:10:46.728 --rc genhtml_function_coverage=1 00:10:46.728 --rc genhtml_legend=1 00:10:46.728 --rc geninfo_all_blocks=1 00:10:46.728 --rc geninfo_unexecuted_blocks=1 00:10:46.728 00:10:46.728 ' 00:10:46.728 10:47:35 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:46.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.728 --rc genhtml_branch_coverage=1 00:10:46.728 --rc genhtml_function_coverage=1 00:10:46.728 --rc genhtml_legend=1 00:10:46.728 --rc geninfo_all_blocks=1 00:10:46.728 --rc geninfo_unexecuted_blocks=1 00:10:46.728 00:10:46.728 ' 00:10:46.728 10:47:35 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:46.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.728 --rc genhtml_branch_coverage=1 00:10:46.728 --rc genhtml_function_coverage=1 00:10:46.728 --rc genhtml_legend=1 00:10:46.728 --rc geninfo_all_blocks=1 00:10:46.728 --rc geninfo_unexecuted_blocks=1 00:10:46.728 00:10:46.728 ' 00:10:46.728 10:47:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:46.728 10:47:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67012 00:10:46.728 10:47:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67012 00:10:46.728 10:47:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:46.728 10:47:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67044 00:10:46.728 10:47:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:46.728 10:47:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67044 00:10:46.728 10:47:35 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67044 ']' 00:10:46.728 10:47:35 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.728 10:47:35 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.728 10:47:35 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.728 10:47:35 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.728 10:47:35 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:46.728 [2024-11-20 10:47:35.972245] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:10:46.728 [2024-11-20 10:47:35.972650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67044 ] 00:10:46.987 [2024-11-20 10:47:36.155757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:47.247 [2024-11-20 10:47:36.268725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.247 [2024-11-20 10:47:36.268759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.236 10:47:37 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.236 Checking default timeout settings: 00:10:48.236 10:47:37 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:48.236 10:47:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:48.236 10:47:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:48.236 Making settings changes with rpc: 00:10:48.236 10:47:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:48.236 10:47:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:48.495 Check default vs. modified settings: 00:10:48.495 10:47:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:48.495 10:47:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:48.755 10:47:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:48.755 10:47:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:48.755 10:47:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67012 00:10:48.755 10:47:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:48.755 10:47:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:48.755 10:47:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:48.755 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67012 00:10:48.755 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:48.755 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.014 Setting action_on_timeout is changed as expected. 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67012 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67012 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.014 Setting timeout_us is changed as expected. 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
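The checks above extract each setting from the two save_config dumps with a grep | awk | sed pipeline and require the modified value to differ from the default. A minimal standalone sketch of that loop (file names and setting list taken from the trace; this is a compressed reading of nvme_rpc_timeouts.sh, not the verbatim script):

    settings_to_check='action_on_timeout timeout_us timeout_admin_us'
    for setting in $settings_to_check; do
        # Pull the value column for this setting out of each JSON dump and
        # strip quotes/commas so a plain string comparison works.
        setting_before=$(grep "$setting" /tmp/settings_default_67012 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        setting_modified=$(grep "$setting" /tmp/settings_modified_67012 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$setting_before" == "$setting_modified" ]; then
            echo "ERROR: setting $setting did not change" >&2
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done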
00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67012 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67012 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.014 Setting timeout_admin_us is changed as expected. 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:49.014 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67012 /tmp/settings_modified_67012 00:10:49.015 10:47:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67044 00:10:49.015 10:47:38 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67044 ']' 00:10:49.015 10:47:38 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67044 00:10:49.015 10:47:38 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:49.015 10:47:38 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.015 10:47:38 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67044 00:10:49.015 killing process with pid 67044 00:10:49.015 10:47:38 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.015 10:47:38 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.015 10:47:38 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67044' 00:10:49.015 10:47:38 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67044 00:10:49.015 10:47:38 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67044 00:10:51.553 RPC TIMEOUT SETTING TEST PASSED. 00:10:51.553 10:47:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
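Note the teardown order visible above: the trap registered before spdk_tgt came up (kill -9 plus rm -f on SIGINT/SIGTERM/EXIT) is cleared with trap - once all three settings pass, and only then are the temp files removed and the target killed explicitly. The pattern, sketched (the pid is shown literally here; the real script captures it from "$!" and gates on waitforlisten):

    spdk_tgt_pid=67044   # assumption: taken from "$!" after launching spdk_tgt in the real run
    tmp_default=/tmp/settings_default_67012
    tmp_modified=/tmp/settings_modified_67012
    # Fail-safe: any signal or early exit tears everything down.
    trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmp_default} ${tmp_modified}; exit 1' SIGINT SIGTERM EXIT
    # ... run the default-vs-modified checks ...
    # Success path: disarm the trap, then clean up deliberately.
    trap - SIGINT SIGTERM EXIT
    rm -f "$tmp_default" "$tmp_modified"
    kill "$spdk_tgt_pid"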
00:10:51.553 ************************************ 00:10:51.553 END TEST nvme_rpc_timeouts 00:10:51.553 ************************************ 00:10:51.553 00:10:51.553 real 0m4.834s 00:10:51.554 user 0m9.105s 00:10:51.554 sys 0m0.776s 00:10:51.554 10:47:40 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.554 10:47:40 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:51.554 10:47:40 -- spdk/autotest.sh@239 -- # uname -s 00:10:51.554 10:47:40 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:51.554 10:47:40 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:51.554 10:47:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:51.554 10:47:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.554 10:47:40 -- common/autotest_common.sh@10 -- # set +x 00:10:51.554 ************************************ 00:10:51.554 START TEST sw_hotplug 00:10:51.554 ************************************ 00:10:51.554 10:47:40 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:51.554 * Looking for test storage... 00:10:51.554 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:51.554 10:47:40 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:51.554 10:47:40 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:51.554 10:47:40 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:51.554 10:47:40 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.554 10:47:40 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:51.554 10:47:40 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.554 10:47:40 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:51.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.554 --rc genhtml_branch_coverage=1 00:10:51.554 --rc genhtml_function_coverage=1 00:10:51.554 --rc genhtml_legend=1 00:10:51.554 --rc geninfo_all_blocks=1 00:10:51.554 --rc geninfo_unexecuted_blocks=1 00:10:51.554 00:10:51.554 ' 00:10:51.554 10:47:40 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:51.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.554 --rc genhtml_branch_coverage=1 00:10:51.554 --rc genhtml_function_coverage=1 00:10:51.554 --rc genhtml_legend=1 00:10:51.554 --rc geninfo_all_blocks=1 00:10:51.554 --rc geninfo_unexecuted_blocks=1 00:10:51.554 00:10:51.554 ' 00:10:51.554 10:47:40 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:51.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.554 --rc genhtml_branch_coverage=1 00:10:51.554 --rc genhtml_function_coverage=1 00:10:51.554 --rc genhtml_legend=1 00:10:51.554 --rc geninfo_all_blocks=1 00:10:51.554 --rc geninfo_unexecuted_blocks=1 00:10:51.554 00:10:51.554 ' 00:10:51.554 10:47:40 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:51.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.554 --rc genhtml_branch_coverage=1 00:10:51.554 --rc genhtml_function_coverage=1 00:10:51.554 --rc genhtml_legend=1 00:10:51.554 --rc geninfo_all_blocks=1 00:10:51.554 --rc geninfo_unexecuted_blocks=1 00:10:51.554 00:10:51.554 ' 00:10:51.554 10:47:40 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:52.123 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:52.383 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:52.383 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:52.383 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:52.383 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:52.383 10:47:41 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:52.383 10:47:41 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:52.383 10:47:41 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
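The lcov gate that opens this test (lt 1.15 2) is the same scripts/common.sh version comparison already traced in the nvme_rpc_timeouts run: both versions are split on '.', '-' and ':' and compared field by field. A compressed reimplementation of the effective logic (the real helper routes through cmp_versions/decimal; this sketch inlines them):

    lt() {
        local -a ver1 ver2
        local v d1 d2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        # Walk the longer of the two component lists, padding with 0.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 > d2)) && return 1
            ((d1 < d2)) && return 0
        done
        return 1   # equal is not "less than"
    }
    lt 1.15 2 && echo "lcov older than 2.x: enable the legacy --rc coverage options"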
00:10:52.383 10:47:41 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:52.383 10:47:41 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:52.643 10:47:41 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:52.643 10:47:41 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:52.643 10:47:41 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:52.643 10:47:41 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:52.643 10:47:41 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:53.211 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:53.471 Waiting for block devices as requested 00:10:53.471 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:53.471 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:53.730 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:53.730 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:59.003 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:59.003 10:47:48 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:59.003 10:47:48 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:59.571 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:59.571 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:59.571 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:59.830 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:00.399 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:00.399 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:00.399 10:47:49 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:00.399 10:47:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.399 10:47:49 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:00.399 10:47:49 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:00.399 10:47:49 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=67940 00:11:00.399 10:47:49 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:00.399 10:47:49 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:00.399 10:47:49 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:00.399 10:47:49 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:00.399 10:47:49 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:00.399 10:47:49 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:00.399 10:47:49 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:00.399 10:47:49 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:00.399 10:47:49 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:11:00.399 10:47:49 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:00.399 10:47:49 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:00.399 10:47:49 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:00.399 10:47:49 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:00.399 10:47:49 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:00.658 Initializing NVMe Controllers 00:11:00.658 Attaching to 0000:00:10.0 00:11:00.658 Attaching to 0000:00:11.0 00:11:00.658 Attached to 0000:00:10.0 00:11:00.658 Attached to 0000:00:11.0 00:11:00.658 Initialization complete. Starting I/O... 
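For context, the nvme_in_userspace walk traced just before this boils down to one pipeline plus a driver check: list PCI functions whose class/subclass/progif is 01/08/02 (NVMe), then keep those not claimed by the kernel nvme driver. A sketch of that effective logic (the awk -v 'cc="0108"' assignment deliberately keeps the quotes, since lspci -mm prints field 2 quoted):

    nvme_in_userspace_sketch() {
        local bdf
        # lspci -mm -n -D prints: <dddd:bb:dd.f> "<class>" "<vendor>" "<device>" ... -p02
        for bdf in $(lspci -mm -n -D | grep -i -- -p02 \
            | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
            # Controllers already bound to the kernel nvme driver are not
            # reachable from userspace, so the test skips them (Linux path).
            [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue
            echo "$bdf"
        done
    }
    nvmes=($(nvme_in_userspace_sketch))   # the trace found 4, then trimmed to nvme_count=2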
00:11:00.658 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:00.658 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:00.658 00:11:01.596 QEMU NVMe Ctrl (12340 ): 1592 I/Os completed (+1592) 00:11:01.596 QEMU NVMe Ctrl (12341 ): 1592 I/Os completed (+1592) 00:11:01.596 00:11:02.994 QEMU NVMe Ctrl (12340 ): 3760 I/Os completed (+2168) 00:11:02.994 QEMU NVMe Ctrl (12341 ): 3760 I/Os completed (+2168) 00:11:02.994 00:11:03.930 QEMU NVMe Ctrl (12340 ): 5960 I/Os completed (+2200) 00:11:03.930 QEMU NVMe Ctrl (12341 ): 5989 I/Os completed (+2229) 00:11:03.930 00:11:04.867 QEMU NVMe Ctrl (12340 ): 8188 I/Os completed (+2228) 00:11:04.867 QEMU NVMe Ctrl (12341 ): 8221 I/Os completed (+2232) 00:11:04.867 00:11:05.805 QEMU NVMe Ctrl (12340 ): 10408 I/Os completed (+2220) 00:11:05.805 QEMU NVMe Ctrl (12341 ): 10441 I/Os completed (+2220) 00:11:05.805 00:11:06.371 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:06.371 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.371 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.630 [2024-11-20 10:47:55.624861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:06.630 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:06.630 [2024-11-20 10:47:55.626729] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.630 [2024-11-20 10:47:55.626935] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.630 [2024-11-20 10:47:55.626992] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.630 [2024-11-20 10:47:55.627181] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.630 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:06.630 [2024-11-20 10:47:55.630252] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.630 [2024-11-20 10:47:55.630390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.630 [2024-11-20 10:47:55.630451] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.630 [2024-11-20 10:47:55.630560] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.630 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:11:06.630 EAL: Scan for (pci) bus failed. 00:11:06.630 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.630 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.630 [2024-11-20 10:47:55.663789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:06.630 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:06.630 [2024-11-20 10:47:55.665532] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.630 [2024-11-20 10:47:55.665803] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.630 [2024-11-20 10:47:55.665837] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.630 [2024-11-20 10:47:55.665857] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.630 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:06.630 [2024-11-20 10:47:55.668464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.630 [2024-11-20 10:47:55.668507] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.630 [2024-11-20 10:47:55.668528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.630 [2024-11-20 10:47:55.668544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.630 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:06.630 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:06.630 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:06.630 EAL: Scan for (pci) bus failed. 00:11:06.630 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.630 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.630 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:06.630 00:11:06.630 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:06.888 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.888 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.888 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.888 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:06.888 Attaching to 0000:00:10.0 00:11:06.888 Attached to 0000:00:10.0 00:11:06.888 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:06.888 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.888 10:47:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:06.888 Attaching to 0000:00:11.0 00:11:06.888 Attached to 0000:00:11.0 00:11:07.822 QEMU NVMe Ctrl (12340 ): 2092 I/Os completed (+2092) 00:11:07.822 QEMU NVMe Ctrl (12341 ): 1856 I/Os completed (+1856) 00:11:07.822 00:11:08.760 QEMU NVMe Ctrl (12340 ): 4324 I/Os completed (+2232) 00:11:08.760 QEMU NVMe Ctrl (12341 ): 4088 I/Os completed (+2232) 00:11:08.760 00:11:09.699 QEMU NVMe Ctrl (12340 ): 6464 I/Os completed (+2140) 00:11:09.699 QEMU NVMe Ctrl (12341 ): 6231 I/Os completed (+2143) 00:11:09.699 00:11:10.636 QEMU NVMe Ctrl (12340 ): 8664 I/Os completed (+2200) 00:11:10.636 QEMU NVMe Ctrl (12341 ): 8431 I/Os completed (+2200) 00:11:10.636 00:11:12.016 QEMU NVMe Ctrl (12340 ): 10868 I/Os completed (+2204) 00:11:12.016 QEMU NVMe Ctrl (12341 ): 10635 I/Os completed (+2204) 00:11:12.016 00:11:12.583 QEMU NVMe Ctrl (12340 ): 13084 I/Os completed (+2216) 00:11:12.583 QEMU NVMe Ctrl (12341 ): 12851 I/Os completed (+2216) 00:11:12.583 00:11:13.960 QEMU NVMe Ctrl (12340 ): 15304 I/Os completed (+2220) 00:11:13.960 QEMU NVMe Ctrl (12341 ): 15071 I/Os completed (+2220) 
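The remove/attach cycles interleaved with the I/O counters here are plain sysfs writes. The xtrace only shows the echo payloads (1, uio_pci_generic, the BDF, ''), so the target paths in this sketch are an informed guess at the standard kernel PCI interface rather than a quote of sw_hotplug.sh:

    hotplug_cycle() {
        local bdf=$1 hotplug_wait=6
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"    # surprise-remove the function
        sleep "$hotplug_wait"                          # let the driver fail the ctrlr and abort I/O
        echo 1 > /sys/bus/pci/rescan                   # bring the device back on the bus
        # Steer the rediscovered device to the userspace stub driver.
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers_probe       # assumption: explicit probe after override
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # reset for the next cycle
    }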
00:11:13.960 00:11:14.898 QEMU NVMe Ctrl (12340 ): 17516 I/Os completed (+2212) 00:11:14.898 QEMU NVMe Ctrl (12341 ): 17283 I/Os completed (+2212) 00:11:14.898 00:11:15.859 QEMU NVMe Ctrl (12340 ): 19732 I/Os completed (+2216) 00:11:15.859 QEMU NVMe Ctrl (12341 ): 19499 I/Os completed (+2216) 00:11:15.859 00:11:16.794 QEMU NVMe Ctrl (12340 ): 21956 I/Os completed (+2224) 00:11:16.794 QEMU NVMe Ctrl (12341 ): 21723 I/Os completed (+2224) 00:11:16.794 00:11:17.729 QEMU NVMe Ctrl (12340 ): 24172 I/Os completed (+2216) 00:11:17.729 QEMU NVMe Ctrl (12341 ): 23939 I/Os completed (+2216) 00:11:17.729 00:11:18.666 QEMU NVMe Ctrl (12340 ): 26336 I/Os completed (+2164) 00:11:18.666 QEMU NVMe Ctrl (12341 ): 26106 I/Os completed (+2167) 00:11:18.666 00:11:18.925 10:48:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:18.925 10:48:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:18.925 10:48:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:18.925 10:48:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:18.925 [2024-11-20 10:48:08.000231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:18.925 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:18.925 [2024-11-20 10:48:08.002134] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.925 [2024-11-20 10:48:08.002287] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.925 [2024-11-20 10:48:08.002339] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.925 [2024-11-20 10:48:08.002452] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.925 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:18.925 [2024-11-20 10:48:08.005406] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.925 [2024-11-20 10:48:08.005549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.925 [2024-11-20 10:48:08.005618] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.925 [2024-11-20 10:48:08.005742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.925 10:48:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:18.925 10:48:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:18.925 [2024-11-20 10:48:08.039525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:18.925 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:18.925 [2024-11-20 10:48:08.041280] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.925 [2024-11-20 10:48:08.041326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.925 [2024-11-20 10:48:08.041357] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.925 [2024-11-20 10:48:08.041376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.926 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:18.926 [2024-11-20 10:48:08.043912] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.926 [2024-11-20 10:48:08.043950] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.926 [2024-11-20 10:48:08.043970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.926 [2024-11-20 10:48:08.043990] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.926 10:48:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:18.926 10:48:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:18.926 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:18.926 EAL: Scan for (pci) bus failed. 00:11:18.926 10:48:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:18.926 10:48:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:18.926 10:48:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:19.185 10:48:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:19.185 10:48:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.185 10:48:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.185 10:48:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.185 10:48:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:19.185 Attaching to 0000:00:10.0 00:11:19.185 Attached to 0000:00:10.0 00:11:19.185 10:48:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:19.185 10:48:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.185 10:48:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:19.185 Attaching to 0000:00:11.0 00:11:19.185 Attached to 0000:00:11.0 00:11:19.752 QEMU NVMe Ctrl (12340 ): 1216 I/Os completed (+1216) 00:11:19.752 QEMU NVMe Ctrl (12341 ): 988 I/Os completed (+988) 00:11:19.752 00:11:20.690 QEMU NVMe Ctrl (12340 ): 3388 I/Os completed (+2172) 00:11:20.690 QEMU NVMe Ctrl (12341 ): 3162 I/Os completed (+2174) 00:11:20.690 00:11:21.627 QEMU NVMe Ctrl (12340 ): 5564 I/Os completed (+2176) 00:11:21.627 QEMU NVMe Ctrl (12341 ): 5338 I/Os completed (+2176) 00:11:21.627 00:11:22.563 QEMU NVMe Ctrl (12340 ): 7756 I/Os completed (+2192) 00:11:22.563 QEMU NVMe Ctrl (12341 ): 7530 I/Os completed (+2192) 00:11:22.563 00:11:23.995 QEMU NVMe Ctrl (12340 ): 9944 I/Os completed (+2188) 00:11:23.995 QEMU NVMe Ctrl (12341 ): 9719 I/Os completed (+2189) 00:11:23.995 00:11:24.560 QEMU NVMe Ctrl (12340 ): 12140 I/Os completed (+2196) 00:11:24.560 QEMU NVMe Ctrl (12341 ): 11915 I/Os completed (+2196) 00:11:24.561 00:11:25.934 QEMU NVMe Ctrl (12340 ): 14316 I/Os completed (+2176) 00:11:25.934 QEMU NVMe Ctrl (12341 ): 14091 I/Os completed (+2176) 00:11:25.934 
00:11:26.867 QEMU NVMe Ctrl (12340 ): 16504 I/Os completed (+2188) 00:11:26.867 QEMU NVMe Ctrl (12341 ): 16279 I/Os completed (+2188) 00:11:26.867 00:11:27.801 QEMU NVMe Ctrl (12340 ): 18696 I/Os completed (+2192) 00:11:27.801 QEMU NVMe Ctrl (12341 ): 18471 I/Os completed (+2192) 00:11:27.801 00:11:28.735 QEMU NVMe Ctrl (12340 ): 20888 I/Os completed (+2192) 00:11:28.735 QEMU NVMe Ctrl (12341 ): 20663 I/Os completed (+2192) 00:11:28.735 00:11:29.671 QEMU NVMe Ctrl (12340 ): 23080 I/Os completed (+2192) 00:11:29.671 QEMU NVMe Ctrl (12341 ): 22855 I/Os completed (+2192) 00:11:29.671 00:11:30.606 QEMU NVMe Ctrl (12340 ): 25260 I/Os completed (+2180) 00:11:30.606 QEMU NVMe Ctrl (12341 ): 25035 I/Os completed (+2180) 00:11:30.606 00:11:31.172 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:31.172 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:31.172 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.172 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.172 [2024-11-20 10:48:20.369082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:31.172 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:31.172 [2024-11-20 10:48:20.370927] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.172 [2024-11-20 10:48:20.371096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.172 [2024-11-20 10:48:20.371150] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.172 [2024-11-20 10:48:20.371246] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.172 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:31.172 [2024-11-20 10:48:20.374144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.172 [2024-11-20 10:48:20.374284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.172 [2024-11-20 10:48:20.374335] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.172 [2024-11-20 10:48:20.374456] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.172 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.172 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.172 [2024-11-20 10:48:20.404864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:31.172 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:31.172 [2024-11-20 10:48:20.406557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.172 [2024-11-20 10:48:20.406654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.172 [2024-11-20 10:48:20.406764] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.172 [2024-11-20 10:48:20.406790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.172 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:31.172 [2024-11-20 10:48:20.409236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.172 [2024-11-20 10:48:20.409276] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.172 [2024-11-20 10:48:20.409298] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.172 [2024-11-20 10:48:20.409315] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.429 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:31.429 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:31.429 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:31.429 EAL: Scan for (pci) bus failed. 00:11:31.429 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.429 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.429 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:31.429 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:31.429 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.429 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.429 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.429 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:31.429 Attaching to 0000:00:10.0 00:11:31.429 Attached to 0000:00:10.0 00:11:31.687 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:31.687 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.687 10:48:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:31.687 Attaching to 0000:00:11.0 00:11:31.687 Attached to 0000:00:11.0 00:11:31.687 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:31.687 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:31.687 [2024-11-20 10:48:20.744218] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:43.960 10:48:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:43.960 10:48:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:43.960 10:48:32 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.12 00:11:43.960 10:48:32 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.12 00:11:43.960 10:48:32 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:43.960 10:48:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.12 00:11:43.960 10:48:32 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.12 2 00:11:43.960 remove_attach_helper took 43.12s to complete (handling 2 nvme drive(s)) 10:48:32 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:50.541 10:48:38 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 67940 00:11:50.541 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (67940) - No such process 00:11:50.541 10:48:38 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 67940 00:11:50.541 10:48:38 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:50.541 10:48:38 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:50.541 10:48:38 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:50.541 10:48:38 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68484 00:11:50.541 10:48:38 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:50.541 10:48:38 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:50.541 10:48:38 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68484 00:11:50.541 10:48:38 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68484 ']' 00:11:50.541 10:48:38 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.541 10:48:38 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.541 10:48:38 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.541 10:48:38 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.541 10:48:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.541 [2024-11-20 10:48:38.865068] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:11:50.541 [2024-11-20 10:48:38.865409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68484 ] 00:11:50.541 [2024-11-20 10:48:39.047611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.541 [2024-11-20 10:48:39.157490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.799 10:48:40 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.799 10:48:40 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:50.799 10:48:40 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:50.799 10:48:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.799 10:48:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.799 10:48:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.799 10:48:40 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:50.799 10:48:40 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:50.799 10:48:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:50.799 10:48:40 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:50.799 10:48:40 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:50.799 10:48:40 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:50.799 10:48:40 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:50.799 10:48:40 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:50.799 10:48:40 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:50.799 10:48:40 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:50.799 10:48:40 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:50.799 10:48:40 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:50.799 10:48:40 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:57.366 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:57.366 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:57.366 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:57.366 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:57.366 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:57.366 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:57.366 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:57.366 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:57.366 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.366 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.366 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.366 10:48:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.366 10:48:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.366 [2024-11-20 10:48:46.107481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
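With use_bdev=true this second phase stops watching sysfs and instead asks the target itself what it can still see: bdev_bdfs above is just bdev_get_bdevs over the RPC socket, with jq reducing the reply to unique NVMe PCI addresses. Sketch, reusing the jq filter verbatim from the trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev_bdfs() {
        "$rpc_py" bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    bdfs=($(bdev_bdfs))
    (( ${#bdfs[@]} > 0 )) && echo "target still sees: ${bdfs[*]}"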
00:11:57.366 [2024-11-20 10:48:46.109948] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.366 [2024-11-20 10:48:46.109995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.366 [2024-11-20 10:48:46.110015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.366 [2024-11-20 10:48:46.110043] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.366 [2024-11-20 10:48:46.110055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.366 [2024-11-20 10:48:46.110070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.366 [2024-11-20 10:48:46.110083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.366 [2024-11-20 10:48:46.110097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.366 [2024-11-20 10:48:46.110109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.366 [2024-11-20 10:48:46.110129] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.366 [2024-11-20 10:48:46.110140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.366 [2024-11-20 10:48:46.110155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.366 10:48:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.366 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:57.366 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:57.366 [2024-11-20 10:48:46.506832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:57.366 [2024-11-20 10:48:46.509301] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.366 [2024-11-20 10:48:46.509342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.366 [2024-11-20 10:48:46.509378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.366 [2024-11-20 10:48:46.509401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.366 [2024-11-20 10:48:46.509415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.366 [2024-11-20 10:48:46.509428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.366 [2024-11-20 10:48:46.509443] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.366 [2024-11-20 10:48:46.509454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.366 [2024-11-20 10:48:46.509469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.366 [2024-11-20 10:48:46.509481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.366 [2024-11-20 10:48:46.509495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.366 [2024-11-20 10:48:46.509507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.625 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:57.625 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:57.625 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:57.625 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.625 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.625 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.625 10:48:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.625 10:48:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.625 10:48:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.625 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:57.625 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:57.625 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:57.625 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:57.625 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:57.883 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:57.883 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:57.884 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:57.884 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:57.884 10:48:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:57.884 10:48:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:57.884 10:48:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:57.884 10:48:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.097 10:48:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.097 10:48:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.097 10:48:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.097 10:48:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.097 10:48:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.097 [2024-11-20 10:48:59.186432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:10.097 [2024-11-20 10:48:59.189063] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.097 [2024-11-20 10:48:59.189213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.097 [2024-11-20 10:48:59.189383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.097 [2024-11-20 10:48:59.189456] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.097 [2024-11-20 10:48:59.189537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.097 [2024-11-20 10:48:59.189608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.097 [2024-11-20 10:48:59.189701] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.097 [2024-11-20 10:48:59.189742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.097 [2024-11-20 10:48:59.189792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.097 [2024-11-20 10:48:59.189899] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.097 [2024-11-20 10:48:59.189933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.097 [2024-11-20 10:48:59.189986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.097 10:48:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:10.097 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:10.356 [2024-11-20 10:48:59.585786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
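The 'Still waiting for ... to be gone' lines printed after each removal (above for the first event, again below for this one) come from a half-second poll over that same RPC view: keep re-running bdev_bdfs until it returns an empty list. A minimal sketch, with bdev_bdfs as defined earlier:

    # Poll until the target reports no NVMe-backed bdevs left.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done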
00:12:10.356 [2024-11-20 10:48:59.588446] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.356 [2024-11-20 10:48:59.588611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.356 [2024-11-20 10:48:59.588742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.356 [2024-11-20 10:48:59.588808] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.356 [2024-11-20 10:48:59.588845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.356 [2024-11-20 10:48:59.588951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.356 [2024-11-20 10:48:59.589010] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.356 [2024-11-20 10:48:59.589044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.356 [2024-11-20 10:48:59.589211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.356 [2024-11-20 10:48:59.589307] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.356 [2024-11-20 10:48:59.589347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.356 [2024-11-20 10:48:59.589397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.615 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:10.615 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:10.615 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:10.615 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.615 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.615 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.615 10:48:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.615 10:48:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.615 10:48:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.615 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:10.615 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:10.875 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:10.875 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:10.875 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:10.875 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:10.875 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:10.875 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:10.875 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:10.875 10:48:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:10.875 10:49:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:10.875 10:49:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:10.875 10:49:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:23.081 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:23.081 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:23.081 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:23.081 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:23.081 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:23.081 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:23.081 10:49:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.081 10:49:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:23.081 10:49:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.081 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:23.081 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:23.081 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:23.081 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:23.081 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:23.081 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:23.082 [2024-11-20 10:49:12.165612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:23.082 [2024-11-20 10:49:12.168606] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.082 [2024-11-20 10:49:12.168692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.082 [2024-11-20 10:49:12.168756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.082 [2024-11-20 10:49:12.168825] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.082 [2024-11-20 10:49:12.168860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.082 [2024-11-20 10:49:12.168914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.082 [2024-11-20 10:49:12.168965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.082 [2024-11-20 10:49:12.168999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.082 [2024-11-20 10:49:12.169111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.082 [2024-11-20 10:49:12.169170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.082 [2024-11-20 10:49:12.169203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.082 [2024-11-20 10:49:12.169254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.082 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:23.082 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:23.082 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:23.082 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:23.082 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:23.082 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:23.082 10:49:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.082 10:49:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:23.082 10:49:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.082 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:23.082 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:23.341 [2024-11-20 10:49:12.564946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:23.341 [2024-11-20 10:49:12.567416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.341 [2024-11-20 10:49:12.567456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.341 [2024-11-20 10:49:12.567474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.341 [2024-11-20 10:49:12.567493] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.341 [2024-11-20 10:49:12.567507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.341 [2024-11-20 10:49:12.567519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.341 [2024-11-20 10:49:12.567534] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.341 [2024-11-20 10:49:12.567545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.341 [2024-11-20 10:49:12.567562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.341 [2024-11-20 10:49:12.567574] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.341 [2024-11-20 10:49:12.567587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.341 [2024-11-20 10:49:12.567614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.599 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:23.599 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:23.599 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:23.599 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:23.599 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:23.599 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:23.599 10:49:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.599 10:49:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:23.599 10:49:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.599 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:23.599 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:23.887 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:23.887 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:23.887 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:23.887 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:23.887 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:23.887 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:23.887 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:23.887 10:49:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:23.887 10:49:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:23.887 10:49:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:23.887 10:49:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.11 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.11 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.11 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.11 2 00:12:36.093 remove_attach_helper took 45.11s to complete (handling 2 nvme drive(s)) 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.093 10:49:25 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:36.093 10:49:25 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:36.093 10:49:25 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:42.660 10:49:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.660 10:49:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:42.660 [2024-11-20 10:49:31.257571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
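For reference, the timing wrapper traced above (timing_cmd with TIMEFORMAT=%2R) relies on bash's `time` keyword, which formats its report according to the TIMEFORMAT variable; %2R prints elapsed real time in seconds with two decimals, which is where values such as helper_time=45.11 come from. A minimal sketch of the idiom, with illustrative names rather than the SPDK helpers themselves:

    time_helper() {
        local elapsed TIMEFORMAT=%2R
        exec 3>&1                           # keep the real stdout on fd 3
        # `time` reports on the group's stderr; capture that while the
        # timed command's own stdout still reaches the caller via fd 3
        # (in this simplified form the command's stderr is captured too)
        elapsed=$({ time "$@" 1>&3; } 2>&1)
        exec 3>&-
        printf '%s took %ss to complete\n' "$1" "$elapsed"
    }

    time_helper sleep 1.2   # prints: sleep took 1.20s
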
00:12:42.660 [2024-11-20 10:49:31.259327] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.660 [2024-11-20 10:49:31.259484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.660 [2024-11-20 10:49:31.259635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.660 [2024-11-20 10:49:31.259707] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.660 [2024-11-20 10:49:31.259781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.660 [2024-11-20 10:49:31.259840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.660 [2024-11-20 10:49:31.259924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.660 [2024-11-20 10:49:31.259966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.660 [2024-11-20 10:49:31.260017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.660 [2024-11-20 10:49:31.260107] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.660 [2024-11-20 10:49:31.260123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.660 [2024-11-20 10:49:31.260141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.660 10:49:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:42.660 [2024-11-20 10:49:31.656921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
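The "Still waiting for ... to be gone" lines that follow come from a short polling loop around sw_hotplug.sh@50-51: after triggering removal, the script re-reads the bdev list every half second until no NVMe PCI addresses are left. A sketch of that loop, assuming bdev_bdfs as a stand-in for the helper reconstructed a little further below:

    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done
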
00:12:42.660 [2024-11-20 10:49:31.658690] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.660 [2024-11-20 10:49:31.658727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.660 [2024-11-20 10:49:31.658761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.660 [2024-11-20 10:49:31.658782] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.660 [2024-11-20 10:49:31.658807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.660 [2024-11-20 10:49:31.658819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.660 [2024-11-20 10:49:31.658834] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.660 [2024-11-20 10:49:31.658845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.660 [2024-11-20 10:49:31.658859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.660 [2024-11-20 10:49:31.658873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.660 [2024-11-20 10:49:31.658886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.660 [2024-11-20 10:49:31.658898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:42.660 10:49:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:42.660 10:49:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:42.660 10:49:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:42.660 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:42.919 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:42.919 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:42.919 10:49:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:42.919 10:49:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:42.919 10:49:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:42.919 10:49:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:42.919 10:49:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:42.919 10:49:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:42.919 10:49:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:42.919 10:49:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:42.919 10:49:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:55.143 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:55.143 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:55.143 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:55.143 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:55.143 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:55.143 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:55.143 10:49:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.143 10:49:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:55.143 10:49:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.144 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:55.144 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:55.144 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:55.144 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:55.144 [2024-11-20 10:49:44.236706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:55.144 [2024-11-20 10:49:44.238826] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:55.144 [2024-11-20 10:49:44.238986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.144 [2024-11-20 10:49:44.239100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.144 [2024-11-20 10:49:44.239210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:55.144 [2024-11-20 10:49:44.239247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.144 [2024-11-20 10:49:44.239339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.144 [2024-11-20 10:49:44.239436] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:55.144 [2024-11-20 10:49:44.239477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.144 [2024-11-20 10:49:44.239574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.144 [2024-11-20 10:49:44.239687] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:55.144 [2024-11-20 10:49:44.239722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.144 [2024-11-20 10:49:44.239811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.144 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:55.144 10:49:44 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:12:55.144 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:55.144 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:55.144 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:55.144 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:55.144 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:55.144 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:55.144 10:49:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.144 10:49:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:55.144 10:49:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.144 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:55.144 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:55.442 [2024-11-20 10:49:44.636055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:55.442 [2024-11-20 10:49:44.637648] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:55.442 [2024-11-20 10:49:44.637685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.442 [2024-11-20 10:49:44.637705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.442 [2024-11-20 10:49:44.637726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:55.442 [2024-11-20 10:49:44.637744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.442 [2024-11-20 10:49:44.637757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.442 [2024-11-20 10:49:44.637773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:55.442 [2024-11-20 10:49:44.637784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.442 [2024-11-20 10:49:44.637799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.442 [2024-11-20 10:49:44.637812] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:55.442 [2024-11-20 10:49:44.637825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.442 [2024-11-20 10:49:44.637837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.701 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:55.701 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:55.701 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:55.701 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:55.701 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:55.701 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
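The jq invocation reading /dev/fd/63 in the trace just above is how a <(...) process substitution shows up under set -x. Putting those xtrace lines back together, the bdev_bdfs helper at sw_hotplug.sh@12-13 plausibly reads as follows: it queries the running SPDK target for all bdevs over RPC, pulls out each NVMe namespace's PCI address, and de-duplicates the result:

    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }

Outside the test harness, rpc_cmd corresponds to scripts/rpc.py from the SPDK tree talking to the target's RPC socket.
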
00:12:55.701 10:49:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.701 10:49:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:55.701 10:49:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.701 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:55.701 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:55.958 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:55.958 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:55.958 10:49:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:55.958 10:49:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:55.958 10:49:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:55.958 10:49:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:55.958 10:49:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:55.958 10:49:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:55.958 10:49:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:56.215 10:49:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:56.215 10:49:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:08.407 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:08.407 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:08.407 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:08.407 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:08.407 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:08.407 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:08.407 10:49:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.408 10:49:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:08.408 10:49:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.408 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:08.408 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:08.408 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:08.408 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:08.408 [2024-11-20 10:49:57.315674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
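The bare echo traces at sw_hotplug.sh@40 and @56-62 above are missing their redirection targets, since set -x does not print redirections. The values written line up with the standard Linux sysfs PCI hotplug interface, so a hedged reconstruction of one remove/re-attach cycle looks like this (the sysfs paths are an assumption, not taken from the log):

    bdf=0000:00:10.0
    driver=uio_pci_generic

    echo 1 > "/sys/bus/pci/devices/$bdf/remove"     # @40: surprise-remove the device

    echo 1 > /sys/bus/pci/rescan                    # @56: rediscover it on the bus
    echo "$driver" > "/sys/bus/pci/devices/$bdf/driver_override"        # @59
    echo "$bdf" > /sys/bus/pci/drivers_probe        # @60: bind per the override
    echo "$bdf" > "/sys/bus/pci/drivers/$driver/bind" 2> /dev/null || true  # @61
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"               # @62: clear the override

The "nvme -> uio_pci_generic" lines printed later by setup.sh appear to be the same kind of rebinding applied across all NVMe controllers.
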
00:13:08.408 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:08.408 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:08.408 [2024-11-20 10:49:57.317871] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.408 [2024-11-20 10:49:57.317961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.408 [2024-11-20 10:49:57.318026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.408 [2024-11-20 10:49:57.318089] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.408 [2024-11-20 10:49:57.318122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.408 [2024-11-20 10:49:57.318272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.408 [2024-11-20 10:49:57.318332] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.408 [2024-11-20 10:49:57.318387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.408 [2024-11-20 10:49:57.318555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.408 [2024-11-20 10:49:57.318629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.408 [2024-11-20 10:49:57.318663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.408 [2024-11-20 10:49:57.318716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.408 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:08.408 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:08.408 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:08.408 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:08.408 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:08.408 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:08.408 10:49:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.408 10:49:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:08.408 10:49:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.408 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:08.408 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:08.666 [2024-11-20 10:49:57.715022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:08.666 [2024-11-20 10:49:57.716769] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.666 [2024-11-20 10:49:57.716942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.666 [2024-11-20 10:49:57.717100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.666 [2024-11-20 10:49:57.717217] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.666 [2024-11-20 10:49:57.717259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.666 [2024-11-20 10:49:57.717393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.666 [2024-11-20 10:49:57.717522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.666 [2024-11-20 10:49:57.717740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.666 [2024-11-20 10:49:57.717804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.666 [2024-11-20 10:49:57.717856] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.666 [2024-11-20 10:49:57.717946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.666 [2024-11-20 10:49:57.718002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.666 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:08.666 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:08.666 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:08.666 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:08.666 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:08.666 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:08.666 10:49:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.666 10:49:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:08.666 10:49:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.925 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:08.925 10:49:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:08.925 10:49:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:08.925 10:49:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:08.925 10:49:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:08.925 10:49:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:08.925 10:49:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:08.925 10:49:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:08.925 10:49:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:08.925 10:49:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:13:09.184 10:49:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:09.184 10:49:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:09.184 10:49:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:21.450 10:50:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:21.451 10:50:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:21.451 10:50:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:21.451 10:50:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:21.451 10:50:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:21.451 10:50:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:21.451 10:50:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.451 10:50:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:21.451 10:50:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.451 10:50:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:21.451 10:50:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:21.451 10:50:10 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.14 00:13:21.451 10:50:10 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.14 00:13:21.451 10:50:10 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:21.451 10:50:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.14 00:13:21.451 10:50:10 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.14 2 00:13:21.451 remove_attach_helper took 45.14s to complete (handling 2 nvme drive(s)) 10:50:10 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:21.451 10:50:10 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68484 00:13:21.451 10:50:10 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68484 ']' 00:13:21.451 10:50:10 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68484 00:13:21.451 10:50:10 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:13:21.451 10:50:10 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.451 10:50:10 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68484 00:13:21.451 killing process with pid 68484 00:13:21.451 10:50:10 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.451 10:50:10 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.451 10:50:10 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68484' 00:13:21.451 10:50:10 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68484 00:13:21.451 10:50:10 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68484 00:13:23.984 10:50:12 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:24.243 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:24.811 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:24.811 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:24.811 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:24.811 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:25.070 ************************************ 00:13:25.070 END TEST sw_hotplug 00:13:25.070 ************************************ 00:13:25.070 
00:13:25.070 real 2m33.538s 00:13:25.070 user 1m50.952s 00:13:25.070 sys 0m22.714s 00:13:25.070 10:50:14 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.070 10:50:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:25.070 10:50:14 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:25.070 10:50:14 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:25.070 10:50:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:25.070 10:50:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.070 10:50:14 -- common/autotest_common.sh@10 -- # set +x 00:13:25.070 ************************************ 00:13:25.070 START TEST nvme_xnvme 00:13:25.070 ************************************ 00:13:25.070 10:50:14 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:25.070 * Looking for test storage... 00:13:25.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:25.070 10:50:14 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:25.070 10:50:14 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:25.070 10:50:14 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:25.332 10:50:14 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:25.332 10:50:14 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:25.332 10:50:14 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.332 10:50:14 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:25.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.332 --rc genhtml_branch_coverage=1 00:13:25.332 --rc genhtml_function_coverage=1 00:13:25.332 --rc genhtml_legend=1 00:13:25.332 --rc geninfo_all_blocks=1 00:13:25.332 --rc geninfo_unexecuted_blocks=1 00:13:25.332 00:13:25.332 ' 00:13:25.332 10:50:14 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:25.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.332 --rc genhtml_branch_coverage=1 00:13:25.332 --rc genhtml_function_coverage=1 00:13:25.332 --rc genhtml_legend=1 00:13:25.332 --rc geninfo_all_blocks=1 00:13:25.332 --rc geninfo_unexecuted_blocks=1 00:13:25.332 00:13:25.332 ' 00:13:25.332 10:50:14 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:25.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.332 --rc genhtml_branch_coverage=1 00:13:25.332 --rc genhtml_function_coverage=1 00:13:25.332 --rc genhtml_legend=1 00:13:25.332 --rc geninfo_all_blocks=1 00:13:25.332 --rc geninfo_unexecuted_blocks=1 00:13:25.332 00:13:25.332 ' 00:13:25.332 10:50:14 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:25.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.332 --rc genhtml_branch_coverage=1 00:13:25.332 --rc genhtml_function_coverage=1 00:13:25.332 --rc genhtml_legend=1 00:13:25.332 --rc geninfo_all_blocks=1 00:13:25.332 --rc geninfo_unexecuted_blocks=1 00:13:25.332 00:13:25.332 ' 00:13:25.332 10:50:14 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:13:25.332 10:50:14 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:25.332 10:50:14 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:25.332 10:50:14 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:13:25.332 10:50:14 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:25.332 10:50:14 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:25.332 10:50:14 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:25.332 10:50:14 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:25.332 10:50:14 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:25.332 10:50:14 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:25.332 10:50:14 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:25.332 10:50:14 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:25.332 10:50:14 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:25.332 10:50:14 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:25.332 10:50:14 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:25.332 10:50:14 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:25.332 10:50:14 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:25.333 10:50:14 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:25.333 10:50:14 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:25.333 10:50:14 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:25.333 10:50:14 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:25.333 10:50:14 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:25.333 10:50:14 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:25.333 10:50:14 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:25.333 10:50:14 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:25.333 10:50:14 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:25.333 10:50:14 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:13:25.333 10:50:14 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:25.333 10:50:14 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:25.333 10:50:14 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:25.333 10:50:14 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:25.333 10:50:14 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:25.333 10:50:14 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:25.333 10:50:14 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:25.333 10:50:14 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:25.333 #define SPDK_CONFIG_H 00:13:25.333 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:25.333 #define SPDK_CONFIG_APPS 1 00:13:25.333 #define SPDK_CONFIG_ARCH native 00:13:25.333 #define SPDK_CONFIG_ASAN 1 00:13:25.333 #undef SPDK_CONFIG_AVAHI 00:13:25.333 #undef SPDK_CONFIG_CET 00:13:25.333 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:25.333 #define SPDK_CONFIG_COVERAGE 1 00:13:25.333 #define SPDK_CONFIG_CROSS_PREFIX 00:13:25.333 #undef SPDK_CONFIG_CRYPTO 00:13:25.333 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:25.333 #undef SPDK_CONFIG_CUSTOMOCF 00:13:25.333 #undef SPDK_CONFIG_DAOS 00:13:25.333 #define SPDK_CONFIG_DAOS_DIR 00:13:25.333 #define SPDK_CONFIG_DEBUG 1 00:13:25.333 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:25.333 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:25.333 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:25.333 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:25.333 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:25.333 #undef SPDK_CONFIG_DPDK_UADK 00:13:25.333 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:25.333 #define SPDK_CONFIG_EXAMPLES 1 00:13:25.333 #undef SPDK_CONFIG_FC 00:13:25.333 #define SPDK_CONFIG_FC_PATH 00:13:25.333 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:25.333 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:25.333 #define SPDK_CONFIG_FSDEV 1 00:13:25.333 #undef SPDK_CONFIG_FUSE 00:13:25.333 #undef SPDK_CONFIG_FUZZER 00:13:25.333 #define SPDK_CONFIG_FUZZER_LIB 00:13:25.333 #undef SPDK_CONFIG_GOLANG 00:13:25.333 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:25.333 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:25.333 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:25.333 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:25.333 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:25.333 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:25.333 #undef SPDK_CONFIG_HAVE_LZ4 00:13:25.333 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:25.333 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:25.333 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:25.333 #define SPDK_CONFIG_IDXD 1 00:13:25.333 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:25.333 #undef SPDK_CONFIG_IPSEC_MB 00:13:25.333 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:25.333 #define SPDK_CONFIG_ISAL 1 00:13:25.333 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:25.333 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:25.333 #define SPDK_CONFIG_LIBDIR 00:13:25.333 #undef SPDK_CONFIG_LTO 00:13:25.333 #define SPDK_CONFIG_MAX_LCORES 128 00:13:25.333 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:25.333 #define SPDK_CONFIG_NVME_CUSE 1 00:13:25.333 #undef SPDK_CONFIG_OCF 00:13:25.334 #define SPDK_CONFIG_OCF_PATH 00:13:25.334 #define SPDK_CONFIG_OPENSSL_PATH 00:13:25.334 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:25.334 #define SPDK_CONFIG_PGO_DIR 00:13:25.334 #undef SPDK_CONFIG_PGO_USE 00:13:25.334 #define SPDK_CONFIG_PREFIX /usr/local 00:13:25.334 #undef SPDK_CONFIG_RAID5F 00:13:25.334 #undef SPDK_CONFIG_RBD 00:13:25.334 #define SPDK_CONFIG_RDMA 1 00:13:25.334 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:25.334 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:25.334 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:25.334 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:25.334 #define SPDK_CONFIG_SHARED 1 00:13:25.334 #undef SPDK_CONFIG_SMA 00:13:25.334 #define SPDK_CONFIG_TESTS 1 00:13:25.334 #undef SPDK_CONFIG_TSAN 00:13:25.334 #define SPDK_CONFIG_UBLK 1 00:13:25.334 #define SPDK_CONFIG_UBSAN 1 00:13:25.334 #undef SPDK_CONFIG_UNIT_TESTS 00:13:25.334 #undef SPDK_CONFIG_URING 00:13:25.334 #define SPDK_CONFIG_URING_PATH 00:13:25.334 #undef SPDK_CONFIG_URING_ZNS 00:13:25.334 #undef SPDK_CONFIG_USDT 00:13:25.334 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:25.334 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:25.334 #undef SPDK_CONFIG_VFIO_USER 00:13:25.334 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:25.334 #define SPDK_CONFIG_VHOST 1 00:13:25.334 #define SPDK_CONFIG_VIRTIO 1 00:13:25.334 #undef SPDK_CONFIG_VTUNE 00:13:25.334 #define SPDK_CONFIG_VTUNE_DIR 00:13:25.334 #define SPDK_CONFIG_WERROR 1 00:13:25.334 #define SPDK_CONFIG_WPDK_DIR 00:13:25.334 #define SPDK_CONFIG_XNVME 1 00:13:25.334 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:25.334 10:50:14 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:25.334 10:50:14 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:25.334 10:50:14 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:25.334 10:50:14 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.334 10:50:14 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.334 10:50:14 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.334 10:50:14 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.334 10:50:14 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.334 10:50:14 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.334 10:50:14 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:25.334 10:50:14 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@68 -- # uname -s 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:25.334 
10:50:14 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:13:25.334 10:50:14 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:25.334 10:50:14 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:25.335 10:50:14 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:25.335 10:50:14 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:13:25.335 10:50:14 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
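[Editor's sketch] The sanitizer environment assembled in this stretch of the trace can be reproduced standalone. The option strings and the leak:libfuse3.so suppression are exactly those echoed above; collapsing the harness's cat/echo sequence into a single write is a simplification of ours, not the literal autotest_common.sh source:

#!/usr/bin/env bash
# Sketch: recreate the ASAN/UBSAN/LSAN environment the harness sets up above
# before launching test binaries. Simplified: only the suppression visible in
# the trace is written.
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" > "$asan_suppression_file"

export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
export LSAN_OPTIONS=suppressions=$asan_suppression_file

# Any ASan-instrumented binary launched from this shell now inherits these.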
00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 69830 ]] 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 69830 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.CZqAjm 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.CZqAjm/tests/xnvme /tmp/spdk.CZqAjm 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:13:25.336 10:50:14 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13961678848 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5606043648 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261665792 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13961678848 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5606043648 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:25.336 10:50:14 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253273600 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253285888 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=96381296640 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3321483264 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:13:25.336 * Looking for test storage... 
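[Editor's sketch] The storage lookup traced around this point parses df -T into per-mount associative arrays, then walks the candidate directories until one has enough free space. A condensed reconstruction from the trace alone follows; names mirror the trace, the -B1 byte-unit flag is our assumption (the compared values above are byte-sized), and this is not the actual autotest_common.sh source:

#!/usr/bin/env bash
# Sketch: pick the first candidate directory whose backing filesystem has at
# least the requested free space, as set_test_storage does above.
set -u

requested_size=2214592512            # bytes, as printed in the trace
declare -A mounts fss sizes uses avails

# Parse `df -T` into per-mount maps, skipping the header row.
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$size
    uses["$mount"]=$use
    avails["$mount"]=$avail
done < <(df -T -B1 | grep -v Filesystem)

printf '* Looking for test storage...\n'
for target_dir in "$@"; do
    # Resolve the mount point backing this candidate, as the trace's awk does.
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]:-0}
    if (( target_space != 0 && target_space >= requested_size )); then
        printf '* Found test storage at %s\n' "$target_dir"
        exit 0
    fi
done
exit 1

Run against the mounts above, the /home btrfs filesystem wins: ~13.9 GB available comfortably exceeds the ~2.2 GB requested.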
00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:25.336 10:50:14 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13961678848 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:25.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:25.596 10:50:14 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.596 10:50:14 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:25.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.596 --rc genhtml_branch_coverage=1 00:13:25.596 --rc genhtml_function_coverage=1 00:13:25.597 --rc genhtml_legend=1 00:13:25.597 --rc geninfo_all_blocks=1 00:13:25.597 --rc geninfo_unexecuted_blocks=1 00:13:25.597 00:13:25.597 ' 00:13:25.597 10:50:14 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:25.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.597 --rc genhtml_branch_coverage=1 00:13:25.597 --rc genhtml_function_coverage=1 00:13:25.597 --rc genhtml_legend=1 00:13:25.597 --rc geninfo_all_blocks=1 
00:13:25.597 --rc geninfo_unexecuted_blocks=1 00:13:25.597 00:13:25.597 ' 00:13:25.597 10:50:14 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:25.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.597 --rc genhtml_branch_coverage=1 00:13:25.597 --rc genhtml_function_coverage=1 00:13:25.597 --rc genhtml_legend=1 00:13:25.597 --rc geninfo_all_blocks=1 00:13:25.597 --rc geninfo_unexecuted_blocks=1 00:13:25.597 00:13:25.597 ' 00:13:25.597 10:50:14 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:25.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.597 --rc genhtml_branch_coverage=1 00:13:25.597 --rc genhtml_function_coverage=1 00:13:25.597 --rc genhtml_legend=1 00:13:25.597 --rc geninfo_all_blocks=1 00:13:25.597 --rc geninfo_unexecuted_blocks=1 00:13:25.597 00:13:25.597 ' 00:13:25.597 10:50:14 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:25.597 10:50:14 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:25.597 10:50:14 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.597 10:50:14 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.597 10:50:14 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.597 10:50:14 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.597 10:50:14 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.597 10:50:14 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.597 10:50:14 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:25.597 10:50:14 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.597 10:50:14 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:13:25.597 10:50:14 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:26.165 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:26.423 Waiting for block devices as requested 00:13:26.423 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:26.682 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:26.682 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:26.682 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:31.952 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:31.952 10:50:21 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:13:32.211 10:50:21 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:13:32.211 10:50:21 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:13:32.470 10:50:21 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:13:32.470 10:50:21 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:13:32.470 10:50:21 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:13:32.470 10:50:21 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:13:32.470 10:50:21 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:32.729 No valid GPT data, bailing 00:13:32.729 10:50:21 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:32.729 10:50:21 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:13:32.729 10:50:21 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:13:32.729 10:50:21 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:13:32.729 10:50:21 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:13:32.729 10:50:21 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:13:32.729 10:50:21 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:13:32.729 10:50:21 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:13:32.729 10:50:21 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:32.729 10:50:21 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:32.729 10:50:21 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:32.729 10:50:21 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:32.729 10:50:21 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:32.729 10:50:21 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:32.729 10:50:21 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:32.729 10:50:21 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:32.729 10:50:21 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:32.729 10:50:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:32.729 10:50:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.729 10:50:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.729 ************************************ 00:13:32.729 START TEST xnvme_rpc 00:13:32.729 ************************************ 00:13:32.729 10:50:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:32.729 10:50:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:32.729 10:50:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:32.729 10:50:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:32.729 10:50:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:32.729 10:50:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70226 00:13:32.729 10:50:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70226 00:13:32.730 10:50:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70226 ']' 00:13:32.730 10:50:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:32.730 10:50:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.730 10:50:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.730 10:50:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.730 10:50:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.730 10:50:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.730 [2024-11-20 10:50:21.873391] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
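[Editor's sketch] The xnvme_rpc test starting here drives bdev_xnvme_create and bdev_xnvme_delete over the target's JSON-RPC socket. A minimal by-hand reproduction of the same calls, assuming a running spdk_tgt on the default /var/tmp/spdk.sock; the positional argument order (filename, name, io_mechanism) and the conserve_cpu mapping (empty for false, -c for true, per the cc map in the trace) are taken directly from the trace:

#!/usr/bin/env bash
# Sketch of the xnvme_rpc flow by hand; requires jq. Paths mirror the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Create the bdev with conserve_cpu=false (no -c, matching cc["false"]="").
"$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio

# Verify the registered parameters the same way the test does.
"$rpc" framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
# expected output: /dev/nvme0n1

# Tear down before the next conserve_cpu variant.
"$rpc" bdev_xnvme_delete xnvme_bdev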
00:13:32.730 [2024-11-20 10:50:21.873744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70226 ] 00:13:32.988 [2024-11-20 10:50:22.052755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.988 [2024-11-20 10:50:22.165836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.925 xnvme_bdev 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.925 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70226 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70226 ']' 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70226 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70226 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:34.185 killing process with pid 70226 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70226' 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70226 00:13:34.185 10:50:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70226 00:13:36.808 00:13:36.808 real 0m3.908s 00:13:36.808 user 0m3.944s 00:13:36.808 sys 0m0.537s 00:13:36.808 10:50:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.808 ************************************ 00:13:36.808 END TEST xnvme_rpc 00:13:36.808 ************************************ 00:13:36.808 10:50:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.808 10:50:25 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:36.808 10:50:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:36.808 10:50:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.808 10:50:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:36.808 ************************************ 00:13:36.808 START TEST xnvme_bdevperf 00:13:36.808 ************************************ 00:13:36.808 10:50:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:36.808 10:50:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:36.808 10:50:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:36.808 10:50:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:36.808 10:50:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:36.808 10:50:25 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:36.808 10:50:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:36.808 10:50:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:36.808 { 00:13:36.808 "subsystems": [ 00:13:36.808 { 00:13:36.808 "subsystem": "bdev", 00:13:36.808 "config": [ 00:13:36.808 { 00:13:36.808 "params": { 00:13:36.808 "io_mechanism": "libaio", 00:13:36.808 "conserve_cpu": false, 00:13:36.808 "filename": "/dev/nvme0n1", 00:13:36.808 "name": "xnvme_bdev" 00:13:36.808 }, 00:13:36.808 "method": "bdev_xnvme_create" 00:13:36.808 }, 00:13:36.808 { 00:13:36.808 "method": "bdev_wait_for_examine" 00:13:36.808 } 00:13:36.808 ] 00:13:36.808 } 00:13:36.808 ] 00:13:36.808 } 00:13:36.808 [2024-11-20 10:50:25.837162] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:13:36.808 [2024-11-20 10:50:25.837282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70311 ] 00:13:36.808 [2024-11-20 10:50:26.019114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.067 [2024-11-20 10:50:26.128083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.326 Running I/O for 5 seconds... 00:13:39.642 45179.00 IOPS, 176.48 MiB/s [2024-11-20T10:50:29.832Z] 43228.00 IOPS, 168.86 MiB/s [2024-11-20T10:50:30.767Z] 41441.00 IOPS, 161.88 MiB/s [2024-11-20T10:50:31.704Z] 41141.00 IOPS, 160.71 MiB/s 00:13:42.451 Latency(us) 00:13:42.451 [2024-11-20T10:50:31.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.451 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:42.451 xnvme_bdev : 5.00 42229.76 164.96 0.00 0.00 1512.55 160.39 6079.85 00:13:42.451 [2024-11-20T10:50:31.704Z] =================================================================================================================== 00:13:42.451 [2024-11-20T10:50:31.704Z] Total : 42229.76 164.96 0.00 0.00 1512.55 160.39 6079.85 00:13:43.385 10:50:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:43.385 10:50:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:43.385 10:50:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:43.385 10:50:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:43.385 10:50:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:43.644 { 00:13:43.644 "subsystems": [ 00:13:43.644 { 00:13:43.644 "subsystem": "bdev", 00:13:43.644 "config": [ 00:13:43.644 { 00:13:43.644 "params": { 00:13:43.644 "io_mechanism": "libaio", 00:13:43.644 "conserve_cpu": false, 00:13:43.644 "filename": "/dev/nvme0n1", 00:13:43.644 "name": "xnvme_bdev" 00:13:43.644 }, 00:13:43.644 "method": "bdev_xnvme_create" 00:13:43.644 }, 00:13:43.644 { 00:13:43.644 "method": "bdev_wait_for_examine" 00:13:43.644 } 00:13:43.644 ] 00:13:43.644 } 00:13:43.644 ] 00:13:43.644 } 00:13:43.644 [2024-11-20 10:50:32.693409] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
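[Editor's sketch] The bdevperf runs in this section receive their bdev configuration as JSON on /dev/fd/62. One randread run reproduced by hand; the JSON body and every flag are copied from the trace, while feeding the config through a here-document on /dev/stdin instead of the harness's gen_conf redirection is our substitution:

#!/usr/bin/env bash
# Sketch: one bdevperf randread run as traced above.
spdk=/home/vagrant/spdk_repo/spdk

"$spdk/build/examples/bdevperf" --json /dev/stdin \
    -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF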
00:13:43.644 [2024-11-20 10:50:32.693827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70392 ] 00:13:43.644 [2024-11-20 10:50:32.873443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.903 [2024-11-20 10:50:32.981957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.162 Running I/O for 5 seconds... 00:13:46.469 47136.00 IOPS, 184.12 MiB/s [2024-11-20T10:50:36.657Z] 46861.50 IOPS, 183.05 MiB/s [2024-11-20T10:50:37.592Z] 46922.33 IOPS, 183.29 MiB/s [2024-11-20T10:50:38.530Z] 47383.50 IOPS, 185.09 MiB/s [2024-11-20T10:50:38.530Z] 47315.00 IOPS, 184.82 MiB/s 00:13:49.277 Latency(us) 00:13:49.277 [2024-11-20T10:50:38.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.277 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:49.277 xnvme_bdev : 5.00 47290.44 184.73 0.00 0.00 1350.33 215.49 2934.64 00:13:49.277 [2024-11-20T10:50:38.530Z] =================================================================================================================== 00:13:49.277 [2024-11-20T10:50:38.530Z] Total : 47290.44 184.73 0.00 0.00 1350.33 215.49 2934.64 00:13:50.213 00:13:50.213 real 0m13.702s 00:13:50.213 user 0m4.927s 00:13:50.213 sys 0m6.218s 00:13:50.213 10:50:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.213 10:50:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:50.213 ************************************ 00:13:50.213 END TEST xnvme_bdevperf 00:13:50.213 ************************************ 00:13:50.471 10:50:39 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:50.471 10:50:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:50.471 10:50:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.471 10:50:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.471 ************************************ 00:13:50.471 START TEST xnvme_fio_plugin 00:13:50.471 ************************************ 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:50.471 
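The MiB/s column in these bdevperf summaries is derived from IOPS at the fixed 4096-byte IO size (-o 4096); for the randwrite pass above, an editorial back-of-envelope check:

    47290.44 IOPS x 4096 B = 193,701,642 B/s, / 2^20 ≈ 184.73 MiB/s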
10:50:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:50.471 10:50:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:50.471 { 00:13:50.471 "subsystems": [ 00:13:50.471 { 00:13:50.471 "subsystem": "bdev", 00:13:50.471 "config": [ 00:13:50.471 { 00:13:50.471 "params": { 00:13:50.471 "io_mechanism": "libaio", 00:13:50.471 "conserve_cpu": false, 00:13:50.471 "filename": "/dev/nvme0n1", 00:13:50.471 "name": "xnvme_bdev" 00:13:50.471 }, 00:13:50.471 "method": "bdev_xnvme_create" 00:13:50.471 }, 00:13:50.471 { 00:13:50.471 "method": "bdev_wait_for_examine" 00:13:50.472 } 00:13:50.472 ] 00:13:50.472 } 00:13:50.472 ] 00:13:50.472 } 00:13:50.731 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:50.731 fio-3.35 00:13:50.731 Starting 1 thread 00:13:57.339 00:13:57.339 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70513: Wed Nov 20 10:50:45 2024 00:13:57.339 read: IOPS=51.1k, BW=200MiB/s (209MB/s)(998MiB/5001msec) 00:13:57.339 slat (usec): min=4, max=1022, avg=17.18, stdev=25.51 00:13:57.339 clat (usec): min=63, max=5786, avg=738.02, stdev=450.17 00:13:57.339 lat (usec): min=68, max=5857, avg=755.19, stdev=452.87 00:13:57.339 clat percentiles (usec): 00:13:57.339 | 1.00th=[ 153], 5.00th=[ 231], 10.00th=[ 293], 20.00th=[ 392], 00:13:57.339 | 30.00th=[ 482], 40.00th=[ 578], 50.00th=[ 668], 60.00th=[ 766], 00:13:57.339 | 70.00th=[ 873], 80.00th=[ 996], 90.00th=[ 1188], 95.00th=[ 1401], 00:13:57.339 | 99.00th=[ 2573], 99.50th=[ 3163], 99.90th=[ 4047], 99.95th=[ 4293], 00:13:57.339 | 99.99th=[ 4948] 00:13:57.339 bw ( KiB/s): min=163976, max=255064, 
per=97.69%, avg=199693.33, stdev=28022.46, samples=9 00:13:57.339 iops : min=40994, max=63766, avg=49923.33, stdev=7005.62, samples=9 00:13:57.339 lat (usec) : 100=0.09%, 250=6.41%, 500=25.26%, 750=26.59%, 1000=21.96% 00:13:57.339 lat (msec) : 2=17.74%, 4=1.84%, 10=0.11% 00:13:57.339 cpu : usr=25.86%, sys=55.94%, ctx=216, majf=0, minf=764 00:13:57.339 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=10.6%, 16=26.1%, 32=56.6%, >=64=1.8% 00:13:57.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.339 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:13:57.339 issued rwts: total=255568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:57.339 00:13:57.339 Run status group 0 (all jobs): 00:13:57.339 READ: bw=200MiB/s (209MB/s), 200MiB/s-200MiB/s (209MB/s-209MB/s), io=998MiB (1047MB), run=5001-5001msec 00:13:57.907 ----------------------------------------------------- 00:13:57.907 Suppressions used: 00:13:57.907 count bytes template 00:13:57.907 1 11 /usr/src/fio/parse.c 00:13:57.907 1 8 libtcmalloc_minimal.so 00:13:57.907 1 904 libcrypto.so 00:13:57.907 ----------------------------------------------------- 00:13:57.907 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:57.907 10:50:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:57.908 10:50:46 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:57.908 10:50:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:57.908 10:50:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:57.908 10:50:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:57.908 { 00:13:57.908 "subsystems": [ 00:13:57.908 { 00:13:57.908 "subsystem": "bdev", 00:13:57.908 "config": [ 00:13:57.908 { 00:13:57.908 "params": { 00:13:57.908 "io_mechanism": "libaio", 00:13:57.908 "conserve_cpu": false, 00:13:57.908 "filename": "/dev/nvme0n1", 00:13:57.908 "name": "xnvme_bdev" 00:13:57.908 }, 00:13:57.908 "method": "bdev_xnvme_create" 00:13:57.908 }, 00:13:57.908 { 00:13:57.908 "method": "bdev_wait_for_examine" 00:13:57.908 } 00:13:57.908 ] 00:13:57.908 } 00:13:57.908 ] 00:13:57.908 } 00:13:57.908 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:57.908 fio-3.35 00:13:57.908 Starting 1 thread 00:14:04.475 00:14:04.475 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70607: Wed Nov 20 10:50:52 2024 00:14:04.475 write: IOPS=53.5k, BW=209MiB/s (219MB/s)(1046MiB/5001msec); 0 zone resets 00:14:04.475 slat (usec): min=4, max=1129, avg=16.15, stdev=26.10 00:14:04.475 clat (usec): min=32, max=22669, avg=716.02, stdev=501.11 00:14:04.475 lat (usec): min=104, max=22673, avg=732.17, stdev=501.53 00:14:04.475 clat percentiles (usec): 00:14:04.475 | 1.00th=[ 161], 5.00th=[ 237], 10.00th=[ 289], 20.00th=[ 396], 00:14:04.475 | 30.00th=[ 494], 40.00th=[ 586], 50.00th=[ 676], 60.00th=[ 766], 00:14:04.475 | 70.00th=[ 857], 80.00th=[ 971], 90.00th=[ 1123], 95.00th=[ 1254], 00:14:04.475 | 99.00th=[ 2147], 99.50th=[ 2671], 99.90th=[ 3752], 99.95th=[ 4146], 00:14:04.475 | 99.99th=[22152] 00:14:04.475 bw ( KiB/s): min=195352, max=219480, per=98.70%, avg=211384.33, stdev=8152.98, samples=9 00:14:04.475 iops : min=48838, max=54870, avg=52846.00, stdev=2038.17, samples=9 00:14:04.475 lat (usec) : 50=0.01%, 100=0.09%, 250=6.33%, 500=24.42%, 750=27.62% 00:14:04.475 lat (usec) : 1000=24.06% 00:14:04.475 lat (msec) : 2=16.27%, 4=1.14%, 10=0.03%, 20=0.01%, 50=0.02% 00:14:04.475 cpu : usr=28.80%, sys=56.84%, ctx=78, majf=0, minf=764 00:14:04.475 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=10.6%, 16=25.9%, 32=57.1%, >=64=1.8% 00:14:04.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:04.475 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:14:04.475 issued rwts: total=0,267753,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:04.475 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:04.475 00:14:04.475 Run status group 0 (all jobs): 00:14:04.475 WRITE: bw=209MiB/s (219MB/s), 209MiB/s-209MiB/s (219MB/s-219MB/s), io=1046MiB (1097MB), run=5001-5001msec 00:14:05.043 ----------------------------------------------------- 00:14:05.043 Suppressions used: 00:14:05.043 count bytes template 00:14:05.043 1 11 /usr/src/fio/parse.c 00:14:05.043 1 8 libtcmalloc_minimal.so 00:14:05.043 1 904 libcrypto.so 00:14:05.043 ----------------------------------------------------- 00:14:05.043 00:14:05.303 00:14:05.303 real 0m14.784s 00:14:05.303 
user 0m6.435s 00:14:05.303 sys 0m6.401s 00:14:05.303 10:50:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.303 10:50:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:05.303 ************************************ 00:14:05.303 END TEST xnvme_fio_plugin 00:14:05.303 ************************************ 00:14:05.303 10:50:54 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:05.303 10:50:54 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:05.303 10:50:54 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:05.303 10:50:54 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:05.303 10:50:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:05.303 10:50:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.303 10:50:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.303 ************************************ 00:14:05.303 START TEST xnvme_rpc 00:14:05.303 ************************************ 00:14:05.303 10:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:05.303 10:50:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:05.303 10:50:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:05.303 10:50:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:05.303 10:50:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:05.303 10:50:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70701 00:14:05.303 10:50:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:05.303 10:50:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70701 00:14:05.303 10:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70701 ']' 00:14:05.303 10:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.303 10:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.303 10:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.303 10:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.303 10:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.303 [2024-11-20 10:50:54.493307] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
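This xnvme_rpc pass repeats the create/inspect/delete cycle with conserve_cpu enabled (the -c flag in the create call below). A hedged sketch of the same sequence against a standalone spdk_tgt, assuming the stock scripts/rpc.py entry point that rpc_cmd wraps; the jq filter is the one the test itself applies:

    # sketch: the conserve_cpu=true RPC cycle exercised by this test
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'  # expect: true
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev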
00:14:05.303 [2024-11-20 10:50:54.493636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70701 ] 00:14:05.562 [2024-11-20 10:50:54.675680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.562 [2024-11-20 10:50:54.783410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.498 xnvme_bdev 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.498 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:06.758 10:50:55 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70701 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70701 ']' 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70701 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70701 00:14:06.758 killing process with pid 70701 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70701' 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70701 00:14:06.758 10:50:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70701 00:14:09.293 00:14:09.293 real 0m3.896s 00:14:09.293 user 0m3.961s 00:14:09.293 sys 0m0.551s 00:14:09.293 10:50:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.293 10:50:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.293 ************************************ 00:14:09.293 END TEST xnvme_rpc 00:14:09.293 ************************************ 00:14:09.293 10:50:58 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:09.293 10:50:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:09.293 10:50:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.293 10:50:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:09.293 ************************************ 00:14:09.293 START TEST xnvme_bdevperf 00:14:09.293 ************************************ 00:14:09.293 10:50:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:09.293 10:50:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:09.293 10:50:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:14:09.293 10:50:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:09.293 10:50:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:09.293 10:50:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:09.293 10:50:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:09.293 10:50:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:09.293 { 00:14:09.293 "subsystems": [ 00:14:09.293 { 00:14:09.293 "subsystem": "bdev", 00:14:09.293 "config": [ 00:14:09.293 { 00:14:09.293 "params": { 00:14:09.293 "io_mechanism": "libaio", 00:14:09.293 "conserve_cpu": true, 00:14:09.293 "filename": "/dev/nvme0n1", 00:14:09.293 "name": "xnvme_bdev" 00:14:09.293 }, 00:14:09.293 "method": "bdev_xnvme_create" 00:14:09.293 }, 00:14:09.293 { 00:14:09.293 "method": "bdev_wait_for_examine" 00:14:09.293 } 00:14:09.293 ] 00:14:09.293 } 00:14:09.293 ] 00:14:09.293 } 00:14:09.293 [2024-11-20 10:50:58.445386] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:14:09.293 [2024-11-20 10:50:58.445517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70786 ] 00:14:09.551 [2024-11-20 10:50:58.627188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.551 [2024-11-20 10:50:58.735907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.118 Running I/O for 5 seconds... 00:14:11.990 45586.00 IOPS, 178.07 MiB/s [2024-11-20T10:51:02.180Z] 44521.50 IOPS, 173.91 MiB/s [2024-11-20T10:51:03.117Z] 44075.67 IOPS, 172.17 MiB/s [2024-11-20T10:51:04.494Z] 42500.50 IOPS, 166.02 MiB/s 00:14:15.241 Latency(us) 00:14:15.241 [2024-11-20T10:51:04.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.241 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:15.241 xnvme_bdev : 5.00 43036.25 168.11 0.00 0.00 1483.48 186.71 7527.43 00:14:15.241 [2024-11-20T10:51:04.494Z] =================================================================================================================== 00:14:15.241 [2024-11-20T10:51:04.494Z] Total : 43036.25 168.11 0.00 0.00 1483.48 186.71 7527.43 00:14:16.182 10:51:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:16.182 10:51:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:16.182 10:51:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:16.182 10:51:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:16.182 10:51:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:16.182 { 00:14:16.182 "subsystems": [ 00:14:16.182 { 00:14:16.182 "subsystem": "bdev", 00:14:16.182 "config": [ 00:14:16.182 { 00:14:16.182 "params": { 00:14:16.182 "io_mechanism": "libaio", 00:14:16.182 "conserve_cpu": true, 00:14:16.182 "filename": "/dev/nvme0n1", 00:14:16.182 "name": "xnvme_bdev" 00:14:16.182 }, 00:14:16.182 "method": "bdev_xnvme_create" 00:14:16.182 }, 00:14:16.182 { 00:14:16.182 "method": "bdev_wait_for_examine" 00:14:16.182 } 00:14:16.182 ] 00:14:16.182 } 00:14:16.182 ] 00:14:16.182 } 00:14:16.182 [2024-11-20 10:51:05.299235] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
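Relative to the conserve_cpu=false configs earlier in the run, the JSON streamed to bdevperf here differs only in the conserve_cpu boolean. A quick editorial way to confirm that from a saved copy of the config (conserve.json is a hypothetical filename):

    jq -r '.subsystems[].config[]
           | select(.method == "bdev_xnvme_create").params.conserve_cpu' conserve.json
    # prints: true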
00:14:16.182 [2024-11-20 10:51:05.299356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70862 ] 00:14:16.442 [2024-11-20 10:51:05.468006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.442 [2024-11-20 10:51:05.574683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.700 Running I/O for 5 seconds... 00:14:19.069 37823.00 IOPS, 147.75 MiB/s [2024-11-20T10:51:09.258Z] 37519.50 IOPS, 146.56 MiB/s [2024-11-20T10:51:10.202Z] 36780.00 IOPS, 143.67 MiB/s [2024-11-20T10:51:11.137Z] 36466.00 IOPS, 142.45 MiB/s 00:14:21.884 Latency(us) 00:14:21.884 [2024-11-20T10:51:11.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.884 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:21.884 xnvme_bdev : 5.00 36552.85 142.78 0.00 0.00 1747.57 92.12 5158.66 00:14:21.884 [2024-11-20T10:51:11.137Z] =================================================================================================================== 00:14:21.884 [2024-11-20T10:51:11.137Z] Total : 36552.85 142.78 0.00 0.00 1747.57 92.12 5158.66 00:14:22.820 00:14:22.820 real 0m13.662s 00:14:22.820 user 0m4.830s 00:14:22.820 sys 0m6.379s 00:14:22.820 ************************************ 00:14:22.821 END TEST xnvme_bdevperf 00:14:22.821 ************************************ 00:14:22.821 10:51:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.821 10:51:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:22.821 10:51:12 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:22.821 10:51:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:22.821 10:51:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.821 10:51:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:23.079 ************************************ 00:14:23.079 START TEST xnvme_fio_plugin 00:14:23.079 ************************************ 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:23.079 10:51:12 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:23.079 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:23.080 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:23.080 10:51:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:23.080 { 00:14:23.080 "subsystems": [ 00:14:23.080 { 00:14:23.080 "subsystem": "bdev", 00:14:23.080 "config": [ 00:14:23.080 { 00:14:23.080 "params": { 00:14:23.080 "io_mechanism": "libaio", 00:14:23.080 "conserve_cpu": true, 00:14:23.080 "filename": "/dev/nvme0n1", 00:14:23.080 "name": "xnvme_bdev" 00:14:23.080 }, 00:14:23.080 "method": "bdev_xnvme_create" 00:14:23.080 }, 00:14:23.080 { 00:14:23.080 "method": "bdev_wait_for_examine" 00:14:23.080 } 00:14:23.080 ] 00:14:23.080 } 00:14:23.080 ] 00:14:23.080 } 00:14:23.338 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:23.338 fio-3.35 00:14:23.338 Starting 1 thread 00:14:29.899 00:14:29.899 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70987: Wed Nov 20 10:51:18 2024 00:14:29.899 read: IOPS=55.3k, BW=216MiB/s (226MB/s)(1080MiB/5001msec) 00:14:29.899 slat (usec): min=4, max=1084, avg=15.49, stdev=32.90 00:14:29.899 clat (usec): min=84, max=5543, avg=712.67, stdev=412.48 00:14:29.899 lat (usec): min=136, max=5649, avg=728.17, stdev=414.77 00:14:29.899 clat percentiles (usec): 00:14:29.899 | 1.00th=[ 169], 5.00th=[ 273], 10.00th=[ 343], 20.00th=[ 441], 00:14:29.899 | 30.00th=[ 506], 40.00th=[ 562], 50.00th=[ 627], 60.00th=[ 693], 00:14:29.899 | 70.00th=[ 775], 80.00th=[ 914], 90.00th=[ 1172], 95.00th=[ 1385], 00:14:29.899 | 99.00th=[ 2343], 99.50th=[ 3032], 99.90th=[ 4146], 99.95th=[ 4359], 00:14:29.899 | 99.99th=[ 4883] 00:14:29.899 bw ( KiB/s): min=148752, max=302736, per=100.00%, avg=226884.44, stdev=65091.38, samples=9 
00:14:29.899 iops : min=37188, max=75684, avg=56721.11, stdev=16272.85, samples=9 00:14:29.899 lat (usec) : 100=0.08%, 250=3.77%, 500=24.91%, 750=38.43%, 1000=16.55% 00:14:29.899 lat (msec) : 2=14.84%, 4=1.28%, 10=0.14% 00:14:29.899 cpu : usr=30.14%, sys=55.54%, ctx=33, majf=0, minf=764 00:14:29.899 IO depths : 1=0.2%, 2=0.8%, 4=3.2%, 8=9.1%, 16=23.4%, 32=61.3%, >=64=2.1% 00:14:29.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.899 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:14:29.899 issued rwts: total=276457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.899 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:29.899 00:14:29.899 Run status group 0 (all jobs): 00:14:29.899 READ: bw=216MiB/s (226MB/s), 216MiB/s-216MiB/s (226MB/s-226MB/s), io=1080MiB (1132MB), run=5001-5001msec 00:14:30.157 ----------------------------------------------------- 00:14:30.157 Suppressions used: 00:14:30.157 count bytes template 00:14:30.157 1 11 /usr/src/fio/parse.c 00:14:30.157 1 8 libtcmalloc_minimal.so 00:14:30.157 1 904 libcrypto.so 00:14:30.157 ----------------------------------------------------- 00:14:30.157 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:30.415 10:51:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:30.415 { 00:14:30.415 "subsystems": [ 00:14:30.415 { 00:14:30.415 "subsystem": "bdev", 00:14:30.415 "config": [ 00:14:30.415 { 00:14:30.415 "params": { 00:14:30.415 "io_mechanism": "libaio", 00:14:30.415 "conserve_cpu": true, 00:14:30.415 "filename": "/dev/nvme0n1", 00:14:30.415 "name": "xnvme_bdev" 00:14:30.415 }, 00:14:30.415 "method": "bdev_xnvme_create" 00:14:30.415 }, 00:14:30.415 { 00:14:30.415 "method": "bdev_wait_for_examine" 00:14:30.415 } 00:14:30.415 ] 00:14:30.415 } 00:14:30.415 ] 00:14:30.415 } 00:14:30.674 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:30.674 fio-3.35 00:14:30.674 Starting 1 thread 00:14:37.240 00:14:37.240 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71080: Wed Nov 20 10:51:25 2024 00:14:37.240 write: IOPS=50.7k, BW=198MiB/s (208MB/s)(991MiB/5001msec); 0 zone resets 00:14:37.240 slat (usec): min=4, max=1293, avg=17.16, stdev=34.17 00:14:37.240 clat (usec): min=49, max=6146, avg=767.39, stdev=447.90 00:14:37.240 lat (usec): min=110, max=6250, avg=784.54, stdev=450.04 00:14:37.240 clat percentiles (usec): 00:14:37.240 | 1.00th=[ 176], 5.00th=[ 273], 10.00th=[ 343], 20.00th=[ 445], 00:14:37.240 | 30.00th=[ 529], 40.00th=[ 603], 50.00th=[ 685], 60.00th=[ 766], 00:14:37.240 | 70.00th=[ 873], 80.00th=[ 1020], 90.00th=[ 1254], 95.00th=[ 1450], 00:14:37.240 | 99.00th=[ 2606], 99.50th=[ 3195], 99.90th=[ 4293], 99.95th=[ 4621], 00:14:37.240 | 99.99th=[ 5211] 00:14:37.240 bw ( KiB/s): min=146384, max=268072, per=100.00%, avg=203710.22, stdev=52402.84, samples=9 00:14:37.240 iops : min=36596, max=67018, avg=50927.56, stdev=13100.71, samples=9 00:14:37.240 lat (usec) : 50=0.01%, 100=0.08%, 250=3.69%, 500=22.62%, 750=31.39% 00:14:37.240 lat (usec) : 1000=21.28% 00:14:37.240 lat (msec) : 2=19.18%, 4=1.59%, 10=0.17% 00:14:37.240 cpu : usr=29.60%, sys=55.50%, ctx=104, majf=0, minf=764 00:14:37.240 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=9.6%, 16=24.4%, 32=59.8%, >=64=2.0% 00:14:37.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.240 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:14:37.240 issued rwts: total=0,253669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.240 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:37.240 00:14:37.240 Run status group 0 (all jobs): 00:14:37.240 WRITE: bw=198MiB/s (208MB/s), 198MiB/s-198MiB/s (208MB/s-208MB/s), io=991MiB (1039MB), run=5001-5001msec 00:14:37.808 ----------------------------------------------------- 00:14:37.808 Suppressions used: 00:14:37.808 count bytes template 00:14:37.808 1 11 /usr/src/fio/parse.c 00:14:37.808 1 8 libtcmalloc_minimal.so 00:14:37.808 1 904 libcrypto.so 00:14:37.808 ----------------------------------------------------- 00:14:37.808 00:14:37.808 00:14:37.808 real 0m14.711s 00:14:37.808 user 0m6.612s 00:14:37.808 sys 0m6.321s 00:14:37.808 
10:51:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:37.808 10:51:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:37.808 ************************************ 00:14:37.808 END TEST xnvme_fio_plugin 00:14:37.808 ************************************ 00:14:37.808 10:51:26 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:37.808 10:51:26 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:37.808 10:51:26 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:37.808 10:51:26 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:37.808 10:51:26 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:37.808 10:51:26 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:37.808 10:51:26 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:37.808 10:51:26 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:37.808 10:51:26 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:37.808 10:51:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:37.808 10:51:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:37.808 10:51:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:37.808 ************************************ 00:14:37.808 START TEST xnvme_rpc 00:14:37.808 ************************************ 00:14:37.808 10:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:37.808 10:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:37.808 10:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:37.808 10:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:37.808 10:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:37.808 10:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71166 00:14:37.808 10:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:37.808 10:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71166 00:14:37.808 10:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71166 ']' 00:14:37.808 10:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.808 10:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:37.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.808 10:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.808 10:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:37.808 10:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.808 [2024-11-20 10:51:26.994524] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
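At this point the outer loop has switched io_mechanism to io_uring, with conserve_cpu back to false (hence the empty '' placeholder where -c sat in the previous create call). A sketch of the equivalent manual calls, under the same scripts/rpc.py assumption as above:

    # sketch: the io_uring variant of the create call run below
    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
    scripts/rpc.py framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'  # expect: io_uring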
00:14:37.808 [2024-11-20 10:51:26.994670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71166 ] 00:14:38.067 [2024-11-20 10:51:27.178296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.067 [2024-11-20 10:51:27.292433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.003 xnvme_bdev 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.003 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71166 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71166 ']' 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71166 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71166 00:14:39.291 killing process with pid 71166 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71166' 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71166 00:14:39.291 10:51:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71166 00:14:41.882 00:14:41.882 real 0m3.830s 00:14:41.882 user 0m3.898s 00:14:41.882 sys 0m0.532s 00:14:41.882 ************************************ 00:14:41.882 END TEST xnvme_rpc 00:14:41.882 ************************************ 00:14:41.882 10:51:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.882 10:51:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.882 10:51:30 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:41.882 10:51:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:41.882 10:51:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.882 10:51:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:41.882 ************************************ 00:14:41.882 START TEST xnvme_bdevperf 00:14:41.882 ************************************ 00:14:41.882 10:51:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:41.882 10:51:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:41.882 10:51:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:41.882 10:51:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:41.882 10:51:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:41.882 10:51:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:41.882 10:51:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:41.882 10:51:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:41.882 { 00:14:41.882 "subsystems": [ 00:14:41.882 { 00:14:41.882 "subsystem": "bdev", 00:14:41.882 "config": [ 00:14:41.882 { 00:14:41.882 "params": { 00:14:41.882 "io_mechanism": "io_uring", 00:14:41.882 "conserve_cpu": false, 00:14:41.882 "filename": "/dev/nvme0n1", 00:14:41.882 "name": "xnvme_bdev" 00:14:41.882 }, 00:14:41.882 "method": "bdev_xnvme_create" 00:14:41.882 }, 00:14:41.882 { 00:14:41.882 "method": "bdev_wait_for_examine" 00:14:41.882 } 00:14:41.882 ] 00:14:41.882 } 00:14:41.882 ] 00:14:41.882 } 00:14:41.882 [2024-11-20 10:51:30.874075] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:14:41.882 [2024-11-20 10:51:30.874202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71251 ] 00:14:41.882 [2024-11-20 10:51:31.054659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.141 [2024-11-20 10:51:31.165908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.400 Running I/O for 5 seconds... 00:14:44.273 39157.00 IOPS, 152.96 MiB/s [2024-11-20T10:51:34.905Z] 36387.00 IOPS, 142.14 MiB/s [2024-11-20T10:51:35.843Z] 37016.33 IOPS, 144.60 MiB/s [2024-11-20T10:51:36.779Z] 37946.75 IOPS, 148.23 MiB/s [2024-11-20T10:51:36.779Z] 39188.60 IOPS, 153.08 MiB/s 00:14:47.526 Latency(us) 00:14:47.526 [2024-11-20T10:51:36.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.526 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:47.526 xnvme_bdev : 5.00 39162.80 152.98 0.00 0.00 1629.79 378.35 9475.08 00:14:47.526 [2024-11-20T10:51:36.779Z] =================================================================================================================== 00:14:47.526 [2024-11-20T10:51:36.779Z] Total : 39162.80 152.98 0.00 0.00 1629.79 378.35 9475.08 00:14:48.463 10:51:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:48.463 10:51:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:48.463 10:51:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:48.463 10:51:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:48.463 10:51:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:48.463 { 00:14:48.463 "subsystems": [ 00:14:48.463 { 00:14:48.463 "subsystem": "bdev", 00:14:48.463 "config": [ 00:14:48.463 { 00:14:48.463 "params": { 00:14:48.463 "io_mechanism": "io_uring", 00:14:48.463 "conserve_cpu": false, 00:14:48.463 "filename": "/dev/nvme0n1", 00:14:48.463 "name": "xnvme_bdev" 00:14:48.463 }, 00:14:48.463 "method": "bdev_xnvme_create" 00:14:48.463 }, 00:14:48.463 { 00:14:48.463 "method": "bdev_wait_for_examine" 00:14:48.463 } 00:14:48.463 ] 00:14:48.463 } 00:14:48.463 ] 00:14:48.463 } 00:14:48.463 [2024-11-20 10:51:37.675174] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
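As in the earlier libaio rounds, the xnvme_fio_plugin passes drive stock fio through the SPDK bdev plugin rather than bdevperf. Condensed from the wrapper xtrace in this log, a standalone equivalent of the randread job, with config.json standing in for the JSON the harness pipes over /dev/fd/62:

    # sketch: the ASAN runtime must precede the plugin in LD_PRELOAD, per the harness
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=config.json \
      --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
      --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev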
00:14:48.463 [2024-11-20 10:51:37.675305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71332 ] 00:14:48.722 [2024-11-20 10:51:37.856032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.722 [2024-11-20 10:51:37.965340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.289 Running I/O for 5 seconds... 00:14:51.158 29888.00 IOPS, 116.75 MiB/s [2024-11-20T10:51:41.347Z] 30656.00 IOPS, 119.75 MiB/s [2024-11-20T10:51:42.283Z] 30634.67 IOPS, 119.67 MiB/s [2024-11-20T10:51:43.680Z] 30186.75 IOPS, 117.92 MiB/s 00:14:54.427 Latency(us) 00:14:54.427 [2024-11-20T10:51:43.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.427 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:54.427 xnvme_bdev : 5.00 30148.03 117.77 0.00 0.00 2116.82 1546.28 5790.33 00:14:54.427 [2024-11-20T10:51:43.680Z] =================================================================================================================== 00:14:54.427 [2024-11-20T10:51:43.680Z] Total : 30148.03 117.77 0.00 0.00 2116.82 1546.28 5790.33 00:14:55.364 00:14:55.364 real 0m13.569s 00:14:55.364 user 0m5.966s 00:14:55.364 sys 0m7.399s 00:14:55.364 ************************************ 00:14:55.364 END TEST xnvme_bdevperf 00:14:55.364 ************************************ 00:14:55.364 10:51:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.364 10:51:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:55.364 10:51:44 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:55.364 10:51:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:55.364 10:51:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.364 10:51:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:55.364 ************************************ 00:14:55.364 START TEST xnvme_fio_plugin 00:14:55.364 ************************************ 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- 
# xtrace_disable 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:55.364 10:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:55.364 { 00:14:55.364 "subsystems": [ 00:14:55.364 { 00:14:55.364 "subsystem": "bdev", 00:14:55.364 "config": [ 00:14:55.364 { 00:14:55.364 "params": { 00:14:55.364 "io_mechanism": "io_uring", 00:14:55.364 "conserve_cpu": false, 00:14:55.364 "filename": "/dev/nvme0n1", 00:14:55.364 "name": "xnvme_bdev" 00:14:55.364 }, 00:14:55.365 "method": "bdev_xnvme_create" 00:14:55.365 }, 00:14:55.365 { 00:14:55.365 "method": "bdev_wait_for_examine" 00:14:55.365 } 00:14:55.365 ] 00:14:55.365 } 00:14:55.365 ] 00:14:55.365 } 00:14:55.623 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:55.623 fio-3.35 00:14:55.623 Starting 1 thread 00:15:02.190 00:15:02.190 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71452: Wed Nov 20 10:51:50 2024 00:15:02.190 read: IOPS=28.8k, BW=113MiB/s (118MB/s)(564MiB/5001msec) 00:15:02.190 slat (usec): min=3, max=1045, avg= 5.81, stdev= 3.24 00:15:02.190 clat (usec): min=1123, max=4374, avg=1989.77, stdev=225.77 00:15:02.190 lat (usec): min=1131, max=4382, avg=1995.58, stdev=226.20 00:15:02.190 clat percentiles (usec): 00:15:02.190 | 1.00th=[ 1516], 5.00th=[ 1680], 10.00th=[ 1729], 20.00th=[ 1811], 00:15:02.190 | 30.00th=[ 1860], 40.00th=[ 1926], 50.00th=[ 1975], 60.00th=[ 2024], 00:15:02.190 | 70.00th=[ 2089], 80.00th=[ 2147], 90.00th=[ 2278], 95.00th=[ 2376], 00:15:02.190 | 99.00th=[ 2540], 99.50th=[ 2638], 99.90th=[ 3785], 99.95th=[ 4047], 00:15:02.190 | 99.99th=[ 4293] 00:15:02.190 bw ( KiB/s): min=105472, max=123904, per=99.69%, avg=115029.33, stdev=6907.26, 
samples=9 00:15:02.190 iops : min=26368, max=30976, avg=28757.33, stdev=1726.81, samples=9 00:15:02.190 lat (msec) : 2=55.87%, 4=44.08%, 10=0.06% 00:15:02.190 cpu : usr=29.98%, sys=68.96%, ctx=14, majf=0, minf=762 00:15:02.190 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.8%, 32=50.3%, >=64=1.6% 00:15:02.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.190 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:02.190 issued rwts: total=144256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.190 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:02.190 00:15:02.190 Run status group 0 (all jobs): 00:15:02.190 READ: bw=113MiB/s (118MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s), io=564MiB (591MB), run=5001-5001msec 00:15:02.449 ----------------------------------------------------- 00:15:02.449 Suppressions used: 00:15:02.449 count bytes template 00:15:02.449 1 11 /usr/src/fio/parse.c 00:15:02.449 1 8 libtcmalloc_minimal.so 00:15:02.449 1 904 libcrypto.so 00:15:02.449 ----------------------------------------------------- 00:15:02.449 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:02.708 { 00:15:02.708 "subsystems": [ 00:15:02.708 { 00:15:02.708 "subsystem": "bdev", 00:15:02.708 "config": [ 00:15:02.708 { 00:15:02.708 "params": { 00:15:02.708 "io_mechanism": "io_uring", 00:15:02.708 "conserve_cpu": false, 00:15:02.708 "filename": 
"/dev/nvme0n1", 00:15:02.708 "name": "xnvme_bdev" 00:15:02.708 }, 00:15:02.708 "method": "bdev_xnvme_create" 00:15:02.708 }, 00:15:02.708 { 00:15:02.708 "method": "bdev_wait_for_examine" 00:15:02.708 } 00:15:02.708 ] 00:15:02.708 } 00:15:02.708 ] 00:15:02.708 } 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:02.708 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:02.709 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:02.709 10:51:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:02.709 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:02.709 fio-3.35 00:15:02.709 Starting 1 thread 00:15:09.275 00:15:09.275 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71544: Wed Nov 20 10:51:57 2024 00:15:09.275 write: IOPS=33.1k, BW=129MiB/s (136MB/s)(648MiB/5001msec); 0 zone resets 00:15:09.275 slat (nsec): min=2264, max=75641, avg=5020.73, stdev=1799.78 00:15:09.275 clat (usec): min=862, max=4264, avg=1731.18, stdev=343.23 00:15:09.275 lat (usec): min=864, max=4283, avg=1736.20, stdev=344.29 00:15:09.275 clat percentiles (usec): 00:15:09.275 | 1.00th=[ 996], 5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[ 1434], 00:15:09.276 | 30.00th=[ 1696], 40.00th=[ 1762], 50.00th=[ 1811], 60.00th=[ 1876], 00:15:09.276 | 70.00th=[ 1926], 80.00th=[ 1991], 90.00th=[ 2073], 95.00th=[ 2147], 00:15:09.276 | 99.00th=[ 2376], 99.50th=[ 2507], 99.90th=[ 2737], 99.95th=[ 3425], 00:15:09.276 | 99.99th=[ 4178] 00:15:09.276 bw ( KiB/s): min=117248, max=200704, per=100.00%, avg=133817.78, stdev=27433.40, samples=9 00:15:09.276 iops : min=29312, max=50176, avg=33454.89, stdev=6858.16, samples=9 00:15:09.276 lat (usec) : 1000=1.23% 00:15:09.276 lat (msec) : 2=80.86%, 4=17.88%, 10=0.03% 00:15:09.276 cpu : usr=29.90%, sys=69.10%, ctx=19, majf=0, minf=762 00:15:09.276 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:09.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.276 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:09.276 issued rwts: total=0,165760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.276 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:09.276 00:15:09.276 Run status group 0 (all jobs): 00:15:09.276 WRITE: bw=129MiB/s (136MB/s), 129MiB/s-129MiB/s (136MB/s-136MB/s), io=648MiB (679MB), run=5001-5001msec 00:15:09.843 ----------------------------------------------------- 00:15:09.843 Suppressions used: 00:15:09.843 count bytes template 00:15:09.843 1 11 /usr/src/fio/parse.c 00:15:09.843 1 8 libtcmalloc_minimal.so 00:15:09.843 1 904 libcrypto.so 00:15:09.843 ----------------------------------------------------- 00:15:09.843 00:15:09.843 ************************************ 00:15:09.843 END TEST xnvme_fio_plugin 00:15:09.843 ************************************ 00:15:09.843 00:15:09.843 real 0m14.552s 00:15:09.843 user 0m6.563s 00:15:09.843 sys 0m7.629s 00:15:09.843 
10:51:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.843 10:51:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:09.843 10:51:59 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:09.843 10:51:59 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:09.843 10:51:59 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:09.843 10:51:59 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:09.843 10:51:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:09.843 10:51:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.843 10:51:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.843 ************************************ 00:15:09.843 START TEST xnvme_rpc 00:15:09.843 ************************************ 00:15:09.843 10:51:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:09.843 10:51:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:09.843 10:51:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:09.843 10:51:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:09.843 10:51:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:09.843 10:51:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71630 00:15:09.843 10:51:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:09.843 10:51:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71630 00:15:09.843 10:51:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71630 ']' 00:15:09.843 10:51:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.843 10:51:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.843 10:51:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.843 10:51:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.843 10:51:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.107 [2024-11-20 10:51:59.149756] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
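The xnvme_rpc test starting here drives the freshly launched spdk_tgt entirely over its RPC socket: create an xnvme bdev on /dev/nvme0n1 with conserve_cpu enabled, read each creation parameter back out of framework_get_config, then delete the bdev and kill the target. A minimal standalone sketch of the same flow — rpc_cmd in the trace corresponds to scripts/rpc.py, and $SPDK_REPO is an assumed stand-in for the checkout path:

$SPDK_REPO/build/bin/spdk_tgt &
$SPDK_REPO/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c   # -c => conserve_cpu=true
$SPDK_REPO/scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'      # expect: true
$SPDK_REPO/scripts/rpc.py bdev_xnvme_delete xnvme_bdev
kill %1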
00:15:10.107 [2024-11-20 10:51:59.150096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71630 ] 00:15:10.107 [2024-11-20 10:51:59.327514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.372 [2024-11-20 10:51:59.432251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.324 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.324 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:11.324 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:15:11.324 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.324 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.324 xnvme_bdev 00:15:11.324 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.324 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:11.324 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:11.324 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:11.324 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.324 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.324 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.324 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71630 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71630 ']' 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71630 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71630 00:15:11.325 killing process with pid 71630 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71630' 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71630 00:15:11.325 10:52:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71630 00:15:13.852 ************************************ 00:15:13.852 END TEST xnvme_rpc 00:15:13.852 ************************************ 00:15:13.852 00:15:13.852 real 0m3.690s 00:15:13.852 user 0m3.788s 00:15:13.852 sys 0m0.503s 00:15:13.852 10:52:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.852 10:52:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.852 10:52:02 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:13.852 10:52:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:13.852 10:52:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.852 10:52:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.852 ************************************ 00:15:13.852 START TEST xnvme_bdevperf 00:15:13.852 ************************************ 00:15:13.852 10:52:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:13.852 10:52:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:13.852 10:52:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:15:13.852 10:52:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:13.852 10:52:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:13.853 10:52:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 
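The bdevperf invocation just above receives its bdev table as JSON on /dev/fd/62, generated by gen_conf; the body it gets is dumped verbatim a few lines below. The same run with the configuration in an ordinary file — the file name here is an arbitrary choice:

cat > /tmp/xnvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": true,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme.json \
    -q 64 -w randread -t 5 -T xnvme_bdev -o 4096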
00:15:13.853 10:52:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:13.853 10:52:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:13.853 { 00:15:13.853 "subsystems": [ 00:15:13.853 { 00:15:13.853 "subsystem": "bdev", 00:15:13.853 "config": [ 00:15:13.853 { 00:15:13.853 "params": { 00:15:13.853 "io_mechanism": "io_uring", 00:15:13.853 "conserve_cpu": true, 00:15:13.853 "filename": "/dev/nvme0n1", 00:15:13.853 "name": "xnvme_bdev" 00:15:13.853 }, 00:15:13.853 "method": "bdev_xnvme_create" 00:15:13.853 }, 00:15:13.853 { 00:15:13.853 "method": "bdev_wait_for_examine" 00:15:13.853 } 00:15:13.853 ] 00:15:13.853 } 00:15:13.853 ] 00:15:13.853 } 00:15:13.853 [2024-11-20 10:52:02.897861] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:15:13.853 [2024-11-20 10:52:02.898108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71710 ] 00:15:13.853 [2024-11-20 10:52:03.078806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.110 [2024-11-20 10:52:03.181026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.368 Running I/O for 5 seconds... 00:15:16.676 40448.00 IOPS, 158.00 MiB/s [2024-11-20T10:52:06.863Z] 37120.00 IOPS, 145.00 MiB/s [2024-11-20T10:52:07.799Z] 34794.67 IOPS, 135.92 MiB/s [2024-11-20T10:52:08.734Z] 33248.00 IOPS, 129.88 MiB/s 00:15:19.481 Latency(us) 00:15:19.481 [2024-11-20T10:52:08.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.481 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:19.481 xnvme_bdev : 5.00 31752.59 124.03 0.00 0.00 2009.69 776.43 8211.74 00:15:19.481 [2024-11-20T10:52:08.734Z] =================================================================================================================== 00:15:19.481 [2024-11-20T10:52:08.734Z] Total : 31752.59 124.03 0.00 0.00 2009.69 776.43 8211.74 00:15:20.417 10:52:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:20.417 10:52:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:20.417 10:52:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:20.417 10:52:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:20.417 10:52:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:20.417 { 00:15:20.417 "subsystems": [ 00:15:20.417 { 00:15:20.417 "subsystem": "bdev", 00:15:20.417 "config": [ 00:15:20.417 { 00:15:20.417 "params": { 00:15:20.417 "io_mechanism": "io_uring", 00:15:20.417 "conserve_cpu": true, 00:15:20.417 "filename": "/dev/nvme0n1", 00:15:20.417 "name": "xnvme_bdev" 00:15:20.417 }, 00:15:20.417 "method": "bdev_xnvme_create" 00:15:20.417 }, 00:15:20.417 { 00:15:20.417 "method": "bdev_wait_for_examine" 00:15:20.417 } 00:15:20.417 ] 00:15:20.417 } 00:15:20.417 ] 00:15:20.417 } 00:15:20.417 [2024-11-20 10:52:09.639726] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
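A quick consistency check on the randread table above: bdevperf derives the MiB/s column from IOPS at the 4096-byte I/O size given with -o, so the two columns should always agree:

awk 'BEGIN { print 31752.59 * 4096 / (1024 * 1024) }'   # 124.03, matching the MiB/s column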
00:15:20.417 [2024-11-20 10:52:09.639830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71786 ] 00:15:20.676 [2024-11-20 10:52:09.818163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.676 [2024-11-20 10:52:09.924956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.243 Running I/O for 5 seconds... 00:15:23.117 34048.00 IOPS, 133.00 MiB/s [2024-11-20T10:52:13.306Z] 33088.00 IOPS, 129.25 MiB/s [2024-11-20T10:52:14.680Z] 30058.67 IOPS, 117.42 MiB/s [2024-11-20T10:52:15.615Z] 28256.00 IOPS, 110.38 MiB/s 00:15:26.362 Latency(us) 00:15:26.362 [2024-11-20T10:52:15.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.362 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:26.362 xnvme_bdev : 5.00 27232.20 106.38 0.00 0.00 2343.14 1184.39 8264.38 00:15:26.362 [2024-11-20T10:52:15.615Z] =================================================================================================================== 00:15:26.362 [2024-11-20T10:52:15.615Z] Total : 27232.20 106.38 0.00 0.00 2343.14 1184.39 8264.38 00:15:27.300 00:15:27.301 real 0m13.505s 00:15:27.301 user 0m7.142s 00:15:27.301 sys 0m5.823s 00:15:27.301 10:52:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.301 10:52:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:27.301 ************************************ 00:15:27.301 END TEST xnvme_bdevperf 00:15:27.301 ************************************ 00:15:27.301 10:52:16 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:27.301 10:52:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:27.301 10:52:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.301 10:52:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.301 ************************************ 00:15:27.301 START TEST xnvme_fio_plugin 00:15:27.301 ************************************ 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:27.301 10:52:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:27.301 { 00:15:27.301 "subsystems": [ 00:15:27.301 { 00:15:27.301 "subsystem": "bdev", 00:15:27.301 "config": [ 00:15:27.301 { 00:15:27.301 "params": { 00:15:27.301 "io_mechanism": "io_uring", 00:15:27.301 "conserve_cpu": true, 00:15:27.301 "filename": "/dev/nvme0n1", 00:15:27.301 "name": "xnvme_bdev" 00:15:27.301 }, 00:15:27.301 "method": "bdev_xnvme_create" 00:15:27.301 }, 00:15:27.301 { 00:15:27.301 "method": "bdev_wait_for_examine" 00:15:27.301 } 00:15:27.301 ] 00:15:27.301 } 00:15:27.301 ] 00:15:27.301 } 00:15:27.560 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:27.560 fio-3.35 00:15:27.560 Starting 1 thread 00:15:34.209 00:15:34.209 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71910: Wed Nov 20 10:52:22 2024 00:15:34.209 read: IOPS=23.3k, BW=91.1MiB/s (95.5MB/s)(456MiB/5002msec) 00:15:34.209 slat (nsec): min=3399, max=66387, avg=8021.28, stdev=3281.30 00:15:34.209 clat (usec): min=1620, max=3283, avg=2419.14, stdev=227.35 00:15:34.209 lat (usec): min=1626, max=3302, avg=2427.16, stdev=228.21 00:15:34.209 clat percentiles (usec): 00:15:34.209 | 1.00th=[ 1844], 5.00th=[ 2024], 10.00th=[ 2147], 20.00th=[ 2245], 00:15:34.209 | 30.00th=[ 2311], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2474], 00:15:34.209 | 70.00th=[ 2573], 80.00th=[ 2638], 90.00th=[ 2704], 95.00th=[ 2769], 00:15:34.209 | 99.00th=[ 2868], 99.50th=[ 2900], 99.90th=[ 2999], 99.95th=[ 3097], 00:15:34.209 | 99.99th=[ 3228] 00:15:34.209 bw ( KiB/s): min=90443, max=97280, per=100.00%, 
avg=93334.56, stdev=2481.23, samples=9 00:15:34.209 iops : min=22610, max=24320, avg=23333.56, stdev=620.42, samples=9 00:15:34.209 lat (msec) : 2=4.29%, 4=95.71% 00:15:34.210 cpu : usr=40.53%, sys=54.51%, ctx=15, majf=0, minf=762 00:15:34.210 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:34.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.210 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:34.210 issued rwts: total=116672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.210 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:34.210 00:15:34.210 Run status group 0 (all jobs): 00:15:34.210 READ: bw=91.1MiB/s (95.5MB/s), 91.1MiB/s-91.1MiB/s (95.5MB/s-95.5MB/s), io=456MiB (478MB), run=5002-5002msec 00:15:34.469 ----------------------------------------------------- 00:15:34.469 Suppressions used: 00:15:34.469 count bytes template 00:15:34.469 1 11 /usr/src/fio/parse.c 00:15:34.469 1 8 libtcmalloc_minimal.so 00:15:34.469 1 904 libcrypto.so 00:15:34.470 ----------------------------------------------------- 00:15:34.470 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 
00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:34.470 10:52:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.470 { 00:15:34.470 "subsystems": [ 00:15:34.470 { 00:15:34.470 "subsystem": "bdev", 00:15:34.470 "config": [ 00:15:34.470 { 00:15:34.470 "params": { 00:15:34.470 "io_mechanism": "io_uring", 00:15:34.470 "conserve_cpu": true, 00:15:34.470 "filename": "/dev/nvme0n1", 00:15:34.470 "name": "xnvme_bdev" 00:15:34.470 }, 00:15:34.470 "method": "bdev_xnvme_create" 00:15:34.470 }, 00:15:34.470 { 00:15:34.470 "method": "bdev_wait_for_examine" 00:15:34.470 } 00:15:34.470 ] 00:15:34.470 } 00:15:34.470 ] 00:15:34.470 } 00:15:34.729 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:34.729 fio-3.35 00:15:34.729 Starting 1 thread 00:15:41.296 00:15:41.296 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72002: Wed Nov 20 10:52:29 2024 00:15:41.296 write: IOPS=29.0k, BW=113MiB/s (119MB/s)(567MiB/5001msec); 0 zone resets 00:15:41.296 slat (usec): min=3, max=273, avg= 5.93, stdev= 3.07 00:15:41.296 clat (usec): min=1158, max=3784, avg=1969.19, stdev=380.69 00:15:41.296 lat (usec): min=1163, max=3797, avg=1975.12, stdev=382.16 00:15:41.296 clat percentiles (usec): 00:15:41.296 | 1.00th=[ 1418], 5.00th=[ 1483], 10.00th=[ 1549], 20.00th=[ 1631], 00:15:41.296 | 30.00th=[ 1713], 40.00th=[ 1778], 50.00th=[ 1860], 60.00th=[ 1958], 00:15:41.296 | 70.00th=[ 2147], 80.00th=[ 2343], 90.00th=[ 2573], 95.00th=[ 2671], 00:15:41.296 | 99.00th=[ 2835], 99.50th=[ 2900], 99.90th=[ 3195], 99.95th=[ 3359], 00:15:41.296 | 99.99th=[ 3654] 00:15:41.296 bw ( KiB/s): min=94208, max=142848, per=97.36%, avg=113009.78, stdev=17661.73, samples=9 00:15:41.296 iops : min=23552, max=35712, avg=28252.44, stdev=4415.43, samples=9 00:15:41.296 lat (msec) : 2=62.47%, 4=37.53% 00:15:41.296 cpu : usr=46.12%, sys=49.60%, ctx=17, majf=0, minf=762 00:15:41.296 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:41.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.297 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:41.297 issued rwts: total=0,145120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:41.297 00:15:41.297 Run status group 0 (all jobs): 00:15:41.297 WRITE: bw=113MiB/s (119MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s), io=567MiB (594MB), run=5001-5001msec 00:15:41.863 ----------------------------------------------------- 00:15:41.863 Suppressions used: 00:15:41.863 count bytes template 00:15:41.863 1 11 /usr/src/fio/parse.c 00:15:41.863 1 8 libtcmalloc_minimal.so 00:15:41.863 1 904 libcrypto.so 00:15:41.863 ----------------------------------------------------- 00:15:41.863 00:15:41.863 00:15:41.863 real 0m14.514s 00:15:41.863 user 0m7.977s 00:15:41.863 sys 0m5.823s 00:15:41.863 10:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.863 10:52:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
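Both fio runs in this test go through the same sanitizer dance visible in the xtrace: ldd the fio plugin, pull out the libasan it links against, and LD_PRELOAD that runtime ahead of the plugin so ASan initializes before any instrumented code loads. Condensed into a standalone sketch, with paths as in this run and the JSON config supplied on fd 62 exactly as the harness does (a regular file passed to --spdk_json_conf works as well):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev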
00:15:41.863 ************************************ 00:15:41.863 END TEST xnvme_fio_plugin 00:15:41.863 ************************************ 00:15:41.863 10:52:30 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:41.863 10:52:30 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:15:41.863 10:52:30 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:15:41.863 10:52:30 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:15:41.863 10:52:30 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:41.863 10:52:30 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:41.863 10:52:30 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:41.863 10:52:30 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:41.863 10:52:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:41.863 10:52:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:41.863 10:52:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.863 10:52:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:41.863 ************************************ 00:15:41.863 START TEST xnvme_rpc 00:15:41.863 ************************************ 00:15:41.863 10:52:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:41.863 10:52:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:41.863 10:52:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:41.863 10:52:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:41.863 10:52:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:41.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.863 10:52:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72088 00:15:41.863 10:52:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.863 10:52:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72088 00:15:41.863 10:52:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72088 ']' 00:15:41.863 10:52:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.863 10:52:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.863 10:52:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.863 10:52:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.863 10:52:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:41.863 [2024-11-20 10:52:31.077237] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
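From this point the suite switches legs: the xnvme.sh trace above flips io_mechanism to io_uring_cmd, which talks to the NVMe char device /dev/ng0n1 instead of the block device, and resets conserve_cpu to false before re-running the same three tests. In outline, reconstructed from the xnvme.sh@75-88 xtrace lines scattered through this log:

for io in "${xnvme_io[@]}"; do                        # io_uring and io_uring_cmd appear in this excerpt
  method_bdev_xnvme_create_0["io_mechanism"]=$io
  method_bdev_xnvme_create_0["filename"]=$filename    # /dev/nvme0n1 for io_uring, /dev/ng0n1 for io_uring_cmd
  for cc in "${xnvme_conserve_cpu[@]}"; do            # false, then true
    method_bdev_xnvme_create_0["conserve_cpu"]=$cc
    run_test xnvme_rpc xnvme_rpc
    run_test xnvme_bdevperf xnvme_bdevperf
    run_test xnvme_fio_plugin xnvme_fio_plugin
  done
done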
00:15:41.863 [2024-11-20 10:52:31.077353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72088 ] 00:15:42.122 [2024-11-20 10:52:31.258003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.122 [2024-11-20 10:52:31.371663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.060 xnvme_bdev 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:43.060 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72088 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72088 ']' 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72088 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72088 00:15:43.320 killing process with pid 72088 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72088' 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72088 00:15:43.320 10:52:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72088 00:15:45.858 ************************************ 00:15:45.858 00:15:45.858 real 0m3.672s 00:15:45.858 user 0m3.752s 00:15:45.858 sys 0m0.514s 00:15:45.858 10:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.858 10:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.858 END TEST xnvme_rpc 00:15:45.858 ************************************ 00:15:45.858 10:52:34 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:45.858 10:52:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:45.858 10:52:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.858 10:52:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.858 ************************************ 00:15:45.858 START TEST xnvme_bdevperf 00:15:45.858 ************************************ 00:15:45.858 10:52:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:45.858 10:52:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:45.858 10:52:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:45.858 10:52:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:45.858 10:52:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:45.858 10:52:34 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:45.858 10:52:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:45.858 10:52:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:45.858 { 00:15:45.858 "subsystems": [ 00:15:45.858 { 00:15:45.858 "subsystem": "bdev", 00:15:45.858 "config": [ 00:15:45.858 { 00:15:45.858 "params": { 00:15:45.858 "io_mechanism": "io_uring_cmd", 00:15:45.858 "conserve_cpu": false, 00:15:45.858 "filename": "/dev/ng0n1", 00:15:45.858 "name": "xnvme_bdev" 00:15:45.858 }, 00:15:45.858 "method": "bdev_xnvme_create" 00:15:45.858 }, 00:15:45.858 { 00:15:45.858 "method": "bdev_wait_for_examine" 00:15:45.858 } 00:15:45.858 ] 00:15:45.858 } 00:15:45.858 ] 00:15:45.858 } 00:15:45.858 [2024-11-20 10:52:34.805921] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:15:45.858 [2024-11-20 10:52:34.806029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72172 ] 00:15:45.858 [2024-11-20 10:52:34.986296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.858 [2024-11-20 10:52:35.086029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.426 Running I/O for 5 seconds... 00:15:48.298 29440.00 IOPS, 115.00 MiB/s [2024-11-20T10:52:38.486Z] 28416.00 IOPS, 111.00 MiB/s [2024-11-20T10:52:39.432Z] 27029.33 IOPS, 105.58 MiB/s [2024-11-20T10:52:40.808Z] 26688.00 IOPS, 104.25 MiB/s 00:15:51.555 Latency(us) 00:15:51.555 [2024-11-20T10:52:40.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.555 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:51.555 xnvme_bdev : 5.00 26963.49 105.33 0.00 0.00 2366.17 1052.79 7685.35 00:15:51.555 [2024-11-20T10:52:40.808Z] =================================================================================================================== 00:15:51.555 [2024-11-20T10:52:40.808Z] Total : 26963.49 105.33 0.00 0.00 2366.17 1052.79 7685.35 00:15:52.491 10:52:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:52.491 10:52:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:52.491 10:52:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:52.491 10:52:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:52.491 10:52:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:52.491 { 00:15:52.491 "subsystems": [ 00:15:52.491 { 00:15:52.491 "subsystem": "bdev", 00:15:52.491 "config": [ 00:15:52.491 { 00:15:52.491 "params": { 00:15:52.491 "io_mechanism": "io_uring_cmd", 00:15:52.491 "conserve_cpu": false, 00:15:52.491 "filename": "/dev/ng0n1", 00:15:52.491 "name": "xnvme_bdev" 00:15:52.491 }, 00:15:52.491 "method": "bdev_xnvme_create" 00:15:52.491 }, 00:15:52.491 { 00:15:52.491 "method": "bdev_wait_for_examine" 00:15:52.491 } 00:15:52.491 ] 00:15:52.491 } 00:15:52.491 ] 00:15:52.491 } 00:15:52.491 [2024-11-20 10:52:41.527104] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
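The io_uring_cmd bdevperf pass runs a longer pattern list than the block-device leg: after randread and the randwrite run starting here, it also drives unmap and write_zeroes through the char device, as the following runs show. The four invocations differ only in -w, so the pass is equivalent to the sketch below (config file as sketched earlier, but with io_mechanism io_uring_cmd and filename /dev/ng0n1):

for w in randread randwrite unmap write_zeroes; do
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme.json \
      -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
done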
00:15:52.491 [2024-11-20 10:52:41.527355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72246 ] 00:15:52.491 [2024-11-20 10:52:41.708070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.750 [2024-11-20 10:52:41.811027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.008 Running I/O for 5 seconds... 00:15:54.877 25600.00 IOPS, 100.00 MiB/s [2024-11-20T10:52:45.508Z] 24480.00 IOPS, 95.62 MiB/s [2024-11-20T10:52:46.443Z] 24149.33 IOPS, 94.33 MiB/s [2024-11-20T10:52:47.379Z] 23936.00 IOPS, 93.50 MiB/s 00:15:58.126 Latency(us) 00:15:58.126 [2024-11-20T10:52:47.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.126 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:58.126 xnvme_bdev : 5.01 23754.08 92.79 0.00 0.00 2685.25 1335.72 8001.18 00:15:58.126 [2024-11-20T10:52:47.379Z] =================================================================================================================== 00:15:58.126 [2024-11-20T10:52:47.379Z] Total : 23754.08 92.79 0.00 0.00 2685.25 1335.72 8001.18 00:15:59.061 10:52:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:59.062 10:52:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:59.062 10:52:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:59.062 10:52:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:59.062 10:52:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:59.062 { 00:15:59.062 "subsystems": [ 00:15:59.062 { 00:15:59.062 "subsystem": "bdev", 00:15:59.062 "config": [ 00:15:59.062 { 00:15:59.062 "params": { 00:15:59.062 "io_mechanism": "io_uring_cmd", 00:15:59.062 "conserve_cpu": false, 00:15:59.062 "filename": "/dev/ng0n1", 00:15:59.062 "name": "xnvme_bdev" 00:15:59.062 }, 00:15:59.062 "method": "bdev_xnvme_create" 00:15:59.062 }, 00:15:59.062 { 00:15:59.062 "method": "bdev_wait_for_examine" 00:15:59.062 } 00:15:59.062 ] 00:15:59.062 } 00:15:59.062 ] 00:15:59.062 } 00:15:59.062 [2024-11-20 10:52:48.276306] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:15:59.062 [2024-11-20 10:52:48.276584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72323 ] 00:15:59.319 [2024-11-20 10:52:48.457051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.319 [2024-11-20 10:52:48.554567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.888 Running I/O for 5 seconds... 
00:16:01.759 73216.00 IOPS, 286.00 MiB/s [2024-11-20T10:52:51.949Z] 73152.00 IOPS, 285.75 MiB/s [2024-11-20T10:52:52.884Z] 73152.00 IOPS, 285.75 MiB/s [2024-11-20T10:52:54.261Z] 73184.00 IOPS, 285.88 MiB/s 00:16:05.008 Latency(us) 00:16:05.008 [2024-11-20T10:52:54.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.008 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:16:05.008 xnvme_bdev : 5.00 73196.36 285.92 0.00 0.00 871.87 681.02 2368.77 00:16:05.008 [2024-11-20T10:52:54.262Z] =================================================================================================================== 00:16:05.009 [2024-11-20T10:52:54.262Z] Total : 73196.36 285.92 0.00 0.00 871.87 681.02 2368.77 00:16:05.943 10:52:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:05.943 10:52:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:16:05.943 10:52:54 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:05.943 10:52:54 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:05.943 10:52:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:05.943 { 00:16:05.943 "subsystems": [ 00:16:05.943 { 00:16:05.943 "subsystem": "bdev", 00:16:05.943 "config": [ 00:16:05.943 { 00:16:05.943 "params": { 00:16:05.944 "io_mechanism": "io_uring_cmd", 00:16:05.944 "conserve_cpu": false, 00:16:05.944 "filename": "/dev/ng0n1", 00:16:05.944 "name": "xnvme_bdev" 00:16:05.944 }, 00:16:05.944 "method": "bdev_xnvme_create" 00:16:05.944 }, 00:16:05.944 { 00:16:05.944 "method": "bdev_wait_for_examine" 00:16:05.944 } 00:16:05.944 ] 00:16:05.944 } 00:16:05.944 ] 00:16:05.944 } 00:16:05.944 [2024-11-20 10:52:55.004636] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:16:05.944 [2024-11-20 10:52:55.004747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72397 ] 00:16:05.944 [2024-11-20 10:52:55.182762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.202 [2024-11-20 10:52:55.286889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.459 Running I/O for 5 seconds... 
00:16:08.765 63343.00 IOPS, 247.43 MiB/s [2024-11-20T10:52:58.953Z] 61198.00 IOPS, 239.05 MiB/s [2024-11-20T10:52:59.887Z] 54902.67 IOPS, 214.46 MiB/s [2024-11-20T10:53:00.822Z] 48365.75 IOPS, 188.93 MiB/s [2024-11-20T10:53:00.822Z] 49396.80 IOPS, 192.96 MiB/s 00:16:11.569 Latency(us) 00:16:11.569 [2024-11-20T10:53:00.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.569 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:16:11.569 xnvme_bdev : 5.01 49364.52 192.83 0.00 0.00 1293.28 62.92 33057.52 00:16:11.569 [2024-11-20T10:53:00.822Z] =================================================================================================================== 00:16:11.569 [2024-11-20T10:53:00.822Z] Total : 49364.52 192.83 0.00 0.00 1293.28 62.92 33057.52 00:16:12.505 00:16:12.505 real 0m26.941s 00:16:12.505 user 0m13.807s 00:16:12.505 sys 0m12.690s 00:16:12.505 10:53:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.505 10:53:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:12.505 ************************************ 00:16:12.505 END TEST xnvme_bdevperf 00:16:12.505 ************************************ 00:16:12.505 10:53:01 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:12.505 10:53:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:12.505 10:53:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.505 10:53:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:12.505 ************************************ 00:16:12.505 START TEST xnvme_fio_plugin 00:16:12.505 ************************************ 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:12.505 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:12.763 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:12.763 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:12.763 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:12.763 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:12.763 10:53:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:12.763 { 00:16:12.763 "subsystems": [ 00:16:12.763 { 00:16:12.763 "subsystem": "bdev", 00:16:12.763 "config": [ 00:16:12.763 { 00:16:12.763 "params": { 00:16:12.763 "io_mechanism": "io_uring_cmd", 00:16:12.764 "conserve_cpu": false, 00:16:12.764 "filename": "/dev/ng0n1", 00:16:12.764 "name": "xnvme_bdev" 00:16:12.764 }, 00:16:12.764 "method": "bdev_xnvme_create" 00:16:12.764 }, 00:16:12.764 { 00:16:12.764 "method": "bdev_wait_for_examine" 00:16:12.764 } 00:16:12.764 ] 00:16:12.764 } 00:16:12.764 ] 00:16:12.764 } 00:16:12.764 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:12.764 fio-3.35 00:16:12.764 Starting 1 thread 00:16:19.327 00:16:19.327 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72520: Wed Nov 20 10:53:07 2024 00:16:19.327 read: IOPS=24.3k, BW=95.1MiB/s (99.7MB/s)(476MiB/5002msec) 00:16:19.327 slat (usec): min=2, max=164, avg= 7.97, stdev= 3.49 00:16:19.327 clat (usec): min=1157, max=3544, avg=2306.05, stdev=259.34 00:16:19.327 lat (usec): min=1160, max=3557, avg=2314.02, stdev=260.47 00:16:19.327 clat percentiles (usec): 00:16:19.327 | 1.00th=[ 1369], 5.00th=[ 1844], 10.00th=[ 2024], 20.00th=[ 2147], 00:16:19.327 | 30.00th=[ 2212], 40.00th=[ 2278], 50.00th=[ 2343], 60.00th=[ 2376], 00:16:19.327 | 70.00th=[ 2442], 80.00th=[ 2540], 90.00th=[ 2606], 95.00th=[ 2638], 00:16:19.327 | 99.00th=[ 2737], 99.50th=[ 2769], 99.90th=[ 2900], 99.95th=[ 3032], 00:16:19.327 | 99.99th=[ 3425] 00:16:19.327 bw ( KiB/s): min=94155, max=109349, per=100.00%, avg=97754.89, stdev=4845.73, samples=9 00:16:19.327 iops : min=23538, max=27337, avg=24438.56, stdev=1211.46, samples=9 00:16:19.327 lat (msec) : 2=8.87%, 4=91.13% 00:16:19.327 cpu : usr=40.05%, sys=58.27%, ctx=9, majf=0, minf=762 00:16:19.327 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:19.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.327 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:19.327 issued rwts: 
total=121792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.327 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:19.327 00:16:19.327 Run status group 0 (all jobs): 00:16:19.327 READ: bw=95.1MiB/s (99.7MB/s), 95.1MiB/s-95.1MiB/s (99.7MB/s-99.7MB/s), io=476MiB (499MB), run=5002-5002msec 00:16:19.895 ----------------------------------------------------- 00:16:19.895 Suppressions used: 00:16:19.895 count bytes template 00:16:19.895 1 11 /usr/src/fio/parse.c 00:16:19.895 1 8 libtcmalloc_minimal.so 00:16:19.895 1 904 libcrypto.so 00:16:19.895 ----------------------------------------------------- 00:16:19.895 00:16:19.895 10:53:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:19.895 10:53:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:19.895 10:53:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:19.895 10:53:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:19.895 10:53:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:19.895 10:53:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:19.895 10:53:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:19.895 10:53:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:19.895 10:53:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:19.895 10:53:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:19.895 10:53:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:19.895 10:53:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:19.895 10:53:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:19.895 10:53:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:19.895 10:53:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:19.895 10:53:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:19.895 10:53:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:19.895 10:53:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:19.895 10:53:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:19.895 10:53:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:19.895 10:53:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:16:19.895 { 00:16:19.895 "subsystems": [ 00:16:19.895 { 00:16:19.895 "subsystem": "bdev", 00:16:19.895 "config": [ 00:16:19.895 { 00:16:19.895 "params": { 00:16:19.895 "io_mechanism": "io_uring_cmd", 00:16:19.895 "conserve_cpu": false, 00:16:19.895 "filename": "/dev/ng0n1", 00:16:19.895 "name": "xnvme_bdev" 00:16:19.895 }, 00:16:19.895 "method": "bdev_xnvme_create" 00:16:19.895 }, 00:16:19.895 { 00:16:19.895 "method": "bdev_wait_for_examine" 00:16:19.895 } 00:16:19.895 ] 00:16:19.895 } 00:16:19.895 ] 00:16:19.895 } 00:16:20.170 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:20.170 fio-3.35 00:16:20.170 Starting 1 thread 00:16:26.737 00:16:26.737 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72619: Wed Nov 20 10:53:14 2024 00:16:26.737 write: IOPS=24.8k, BW=96.8MiB/s (102MB/s)(484MiB/5001msec); 0 zone resets 00:16:26.737 slat (nsec): min=2278, max=69739, avg=8010.47, stdev=3494.35 00:16:26.737 clat (usec): min=967, max=3755, avg=2260.86, stdev=351.52 00:16:26.737 lat (usec): min=970, max=3768, avg=2268.87, stdev=352.83 00:16:26.737 clat percentiles (usec): 00:16:26.737 | 1.00th=[ 1139], 5.00th=[ 1401], 10.00th=[ 1893], 20.00th=[ 2089], 00:16:26.737 | 30.00th=[ 2180], 40.00th=[ 2245], 50.00th=[ 2311], 60.00th=[ 2376], 00:16:26.737 | 70.00th=[ 2442], 80.00th=[ 2507], 90.00th=[ 2606], 95.00th=[ 2671], 00:16:26.737 | 99.00th=[ 3064], 99.50th=[ 3294], 99.90th=[ 3556], 99.95th=[ 3589], 00:16:26.737 | 99.99th=[ 3687] 00:16:26.737 bw ( KiB/s): min=94208, max=115250, per=100.00%, avg=99767.33, stdev=7634.70, samples=9 00:16:26.737 iops : min=23552, max=28812, avg=24941.78, stdev=1908.55, samples=9 00:16:26.737 lat (usec) : 1000=0.03% 00:16:26.737 lat (msec) : 2=13.40%, 4=86.57% 00:16:26.737 cpu : usr=40.72%, sys=57.74%, ctx=7, majf=0, minf=762 00:16:26.737 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:26.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.737 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:26.737 issued rwts: total=0,123968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.737 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:26.737 00:16:26.737 Run status group 0 (all jobs): 00:16:26.737 WRITE: bw=96.8MiB/s (102MB/s), 96.8MiB/s-96.8MiB/s (102MB/s-102MB/s), io=484MiB (508MB), run=5001-5001msec 00:16:26.996 ----------------------------------------------------- 00:16:26.996 Suppressions used: 00:16:26.996 count bytes template 00:16:26.996 1 11 /usr/src/fio/parse.c 00:16:26.996 1 8 libtcmalloc_minimal.so 00:16:26.996 1 904 libcrypto.so 00:16:26.996 ----------------------------------------------------- 00:16:26.996 00:16:26.996 00:16:26.996 real 0m14.507s 00:16:26.996 user 0m7.714s 00:16:26.996 sys 0m6.384s 00:16:26.996 10:53:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.996 10:53:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:26.996 ************************************ 00:16:26.996 END TEST xnvme_fio_plugin 00:16:26.996 ************************************ 00:16:27.254 10:53:16 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:27.254 10:53:16 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:27.254 10:53:16 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:16:27.254 10:53:16 nvme_xnvme -- 
xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:27.254 10:53:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:27.254 10:53:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.254 10:53:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.254 ************************************ 00:16:27.254 START TEST xnvme_rpc 00:16:27.254 ************************************ 00:16:27.254 10:53:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:27.254 10:53:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:27.254 10:53:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:27.254 10:53:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:27.254 10:53:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:27.254 10:53:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72699 00:16:27.254 10:53:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:27.254 10:53:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72699 00:16:27.254 10:53:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72699 ']' 00:16:27.254 10:53:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.254 10:53:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.254 10:53:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.254 10:53:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.254 10:53:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.254 [2024-11-20 10:53:16.418088] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:16:27.254 [2024-11-20 10:53:16.418204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72699 ] 00:16:27.512 [2024-11-20 10:53:16.598615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.512 [2024-11-20 10:53:16.701706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.449 xnvme_bdev 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:28.449 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.708 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:28.708 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:28.708 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.708 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:28.708 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.708 10:53:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72699 00:16:28.708 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72699 ']' 00:16:28.708 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72699 00:16:28.708 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:28.708 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.708 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72699 00:16:28.708 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.708 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.708 killing process with pid 72699 00:16:28.708 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72699' 00:16:28.708 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72699 00:16:28.708 10:53:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72699 00:16:31.247 00:16:31.247 real 0m3.696s 00:16:31.247 user 0m3.764s 00:16:31.247 sys 0m0.515s 00:16:31.247 10:53:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.247 10:53:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.247 ************************************ 00:16:31.247 END TEST xnvme_rpc 00:16:31.247 ************************************ 00:16:31.247 10:53:20 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:31.247 10:53:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:31.247 10:53:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.247 10:53:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:31.247 ************************************ 00:16:31.247 START TEST xnvme_bdevperf 00:16:31.247 ************************************ 00:16:31.247 10:53:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:31.247 10:53:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:31.247 10:53:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:31.247 10:53:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:31.247 10:53:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:31.247 10:53:20 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:31.247 10:53:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:31.247 10:53:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:31.247 { 00:16:31.247 "subsystems": [ 00:16:31.247 { 00:16:31.247 "subsystem": "bdev", 00:16:31.247 "config": [ 00:16:31.247 { 00:16:31.247 "params": { 00:16:31.247 "io_mechanism": "io_uring_cmd", 00:16:31.247 "conserve_cpu": true, 00:16:31.247 "filename": "/dev/ng0n1", 00:16:31.247 "name": "xnvme_bdev" 00:16:31.247 }, 00:16:31.247 "method": "bdev_xnvme_create" 00:16:31.247 }, 00:16:31.247 { 00:16:31.247 "method": "bdev_wait_for_examine" 00:16:31.247 } 00:16:31.247 ] 00:16:31.247 } 00:16:31.247 ] 00:16:31.247 } 00:16:31.247 [2024-11-20 10:53:20.165504] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:16:31.247 [2024-11-20 10:53:20.165635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72788 ] 00:16:31.247 [2024-11-20 10:53:20.343536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.247 [2024-11-20 10:53:20.446770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.814 Running I/O for 5 seconds... 00:16:33.685 26048.00 IOPS, 101.75 MiB/s [2024-11-20T10:53:23.872Z] 25088.00 IOPS, 98.00 MiB/s [2024-11-20T10:53:24.808Z] 24746.67 IOPS, 96.67 MiB/s [2024-11-20T10:53:26.185Z] 24624.00 IOPS, 96.19 MiB/s [2024-11-20T10:53:26.185Z] 24755.20 IOPS, 96.70 MiB/s 00:16:36.932 Latency(us) 00:16:36.932 [2024-11-20T10:53:26.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.932 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:36.932 xnvme_bdev : 5.01 24725.46 96.58 0.00 0.00 2580.25 1065.95 8159.10 00:16:36.932 [2024-11-20T10:53:26.185Z] =================================================================================================================== 00:16:36.932 [2024-11-20T10:53:26.185Z] Total : 24725.46 96.58 0.00 0.00 2580.25 1065.95 8159.10 00:16:37.869 10:53:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:37.869 10:53:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:37.869 10:53:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:37.869 10:53:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:37.869 10:53:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:37.869 { 00:16:37.869 "subsystems": [ 00:16:37.869 { 00:16:37.869 "subsystem": "bdev", 00:16:37.869 "config": [ 00:16:37.869 { 00:16:37.869 "params": { 00:16:37.869 "io_mechanism": "io_uring_cmd", 00:16:37.869 "conserve_cpu": true, 00:16:37.869 "filename": "/dev/ng0n1", 00:16:37.869 "name": "xnvme_bdev" 00:16:37.869 }, 00:16:37.869 "method": "bdev_xnvme_create" 00:16:37.869 }, 00:16:37.869 { 00:16:37.869 "method": "bdev_wait_for_examine" 00:16:37.869 } 00:16:37.869 ] 00:16:37.869 } 00:16:37.869 ] 00:16:37.869 } 00:16:37.869 [2024-11-20 10:53:26.939461] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:16:37.869 [2024-11-20 10:53:26.939571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72865 ] 00:16:37.869 [2024-11-20 10:53:27.119409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.127 [2024-11-20 10:53:27.224822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.386 Running I/O for 5 seconds... 00:16:40.375 25472.00 IOPS, 99.50 MiB/s [2024-11-20T10:53:31.004Z] 24576.00 IOPS, 96.00 MiB/s [2024-11-20T10:53:31.570Z] 25861.33 IOPS, 101.02 MiB/s [2024-11-20T10:53:32.947Z] 25794.25 IOPS, 100.76 MiB/s [2024-11-20T10:53:32.947Z] 25384.20 IOPS, 99.16 MiB/s 00:16:43.694 Latency(us) 00:16:43.694 [2024-11-20T10:53:32.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.694 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:43.694 xnvme_bdev : 5.01 25350.97 99.03 0.00 0.00 2516.50 50.79 8632.85 00:16:43.694 [2024-11-20T10:53:32.947Z] =================================================================================================================== 00:16:43.694 [2024-11-20T10:53:32.947Z] Total : 25350.97 99.03 0.00 0.00 2516.50 50.79 8632.85 00:16:44.629 10:53:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:44.629 10:53:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:16:44.629 10:53:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:44.629 10:53:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:44.629 10:53:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:44.629 { 00:16:44.629 "subsystems": [ 00:16:44.629 { 00:16:44.629 "subsystem": "bdev", 00:16:44.629 "config": [ 00:16:44.629 { 00:16:44.629 "params": { 00:16:44.629 "io_mechanism": "io_uring_cmd", 00:16:44.629 "conserve_cpu": true, 00:16:44.629 "filename": "/dev/ng0n1", 00:16:44.629 "name": "xnvme_bdev" 00:16:44.629 }, 00:16:44.629 "method": "bdev_xnvme_create" 00:16:44.629 }, 00:16:44.629 { 00:16:44.629 "method": "bdev_wait_for_examine" 00:16:44.629 } 00:16:44.629 ] 00:16:44.629 } 00:16:44.629 ] 00:16:44.629 } 00:16:44.629 [2024-11-20 10:53:33.721362] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:16:44.629 [2024-11-20 10:53:33.721469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72946 ] 00:16:44.887 [2024-11-20 10:53:33.901557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.887 [2024-11-20 10:53:34.007603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.145 Running I/O for 5 seconds... 
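The conserve_cpu=true variants running here exercise the same knob that TEST xnvme_rpc flipped earlier with rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c and then read back. The equivalent by hand against a running spdk_tgt, using the exact jq filter from the test's rpc_xnvme helper:

# Create the bdev with conserve_cpu enabled (-c), as TEST xnvme_rpc does.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
  bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c

# Read the flag back; prints "true" when conserve_cpu stuck.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'

# Tear down, mirroring the test's cleanup.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_delete xnvme_bdev
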
00:16:47.460 72064.00 IOPS, 281.50 MiB/s [2024-11-20T10:53:37.649Z] 72160.00 IOPS, 281.88 MiB/s [2024-11-20T10:53:38.588Z] 72213.33 IOPS, 282.08 MiB/s [2024-11-20T10:53:39.525Z] 72256.00 IOPS, 282.25 MiB/s [2024-11-20T10:53:39.525Z] 72217.60 IOPS, 282.10 MiB/s 00:16:50.272 Latency(us) 00:16:50.272 [2024-11-20T10:53:39.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.272 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:16:50.272 xnvme_bdev : 5.00 72202.70 282.04 0.00 0.00 883.70 615.22 5474.49 00:16:50.272 [2024-11-20T10:53:39.525Z] =================================================================================================================== 00:16:50.272 [2024-11-20T10:53:39.525Z] Total : 72202.70 282.04 0.00 0.00 883.70 615.22 5474.49 00:16:51.208 10:53:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:51.208 10:53:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:16:51.208 10:53:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:51.208 10:53:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:51.208 10:53:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:51.208 { 00:16:51.208 "subsystems": [ 00:16:51.208 { 00:16:51.208 "subsystem": "bdev", 00:16:51.208 "config": [ 00:16:51.208 { 00:16:51.208 "params": { 00:16:51.208 "io_mechanism": "io_uring_cmd", 00:16:51.208 "conserve_cpu": true, 00:16:51.208 "filename": "/dev/ng0n1", 00:16:51.208 "name": "xnvme_bdev" 00:16:51.208 }, 00:16:51.208 "method": "bdev_xnvme_create" 00:16:51.208 }, 00:16:51.208 { 00:16:51.208 "method": "bdev_wait_for_examine" 00:16:51.208 } 00:16:51.208 ] 00:16:51.208 } 00:16:51.208 ] 00:16:51.208 } 00:16:51.467 [2024-11-20 10:53:40.469922] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:16:51.467 [2024-11-20 10:53:40.470027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73030 ] 00:16:51.467 [2024-11-20 10:53:40.651226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.726 [2024-11-20 10:53:40.760223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.984 Running I/O for 5 seconds... 
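The xnvme_fio_plugin passes that follow push the same bdev through fio's external spdk_bdev ioengine instead of bdevperf; the full command, including the ASAN preload the wrapper computes, is echoed in the trace. A standalone reconstruction, assuming fio is built at /usr/src/fio as in this CI image; the trailing 62< conf.json redirect is an assumed way to supply the JSON on fd 62, where conf.json holds the bdev config shown above.

# Mirror the traced fio invocation; LD_PRELOAD carries ASAN plus the SPDK
# fio bdev plugin, exactly as the wrapper assembles it in the trace.
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev \
  62< conf.json
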
00:16:53.853 35560.00 IOPS, 138.91 MiB/s [2024-11-20T10:53:44.482Z] 35827.50 IOPS, 139.95 MiB/s [2024-11-20T10:53:45.415Z] 35408.00 IOPS, 138.31 MiB/s [2024-11-20T10:53:46.350Z] 38490.25 IOPS, 150.35 MiB/s 00:16:57.097 Latency(us) 00:16:57.097 [2024-11-20T10:53:46.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.097 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:16:57.097 xnvme_bdev : 5.00 43844.62 171.27 0.00 0.00 1452.97 190.82 13686.23 00:16:57.097 [2024-11-20T10:53:46.350Z] =================================================================================================================== 00:16:57.097 [2024-11-20T10:53:46.350Z] Total : 43844.62 171.27 0.00 0.00 1452.97 190.82 13686.23 00:16:58.034 00:16:58.034 real 0m27.044s 00:16:58.034 user 0m16.417s 00:16:58.034 sys 0m8.722s 00:16:58.034 10:53:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.034 10:53:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:58.034 ************************************ 00:16:58.034 END TEST xnvme_bdevperf 00:16:58.034 ************************************ 00:16:58.034 10:53:47 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:58.034 10:53:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:58.034 10:53:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.034 10:53:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:58.034 ************************************ 00:16:58.034 START TEST xnvme_fio_plugin 00:16:58.034 ************************************ 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:58.034 10:53:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:58.034 { 00:16:58.034 "subsystems": [ 00:16:58.034 { 00:16:58.034 "subsystem": "bdev", 00:16:58.034 "config": [ 00:16:58.034 { 00:16:58.034 "params": { 00:16:58.034 "io_mechanism": "io_uring_cmd", 00:16:58.034 "conserve_cpu": true, 00:16:58.034 "filename": "/dev/ng0n1", 00:16:58.034 "name": "xnvme_bdev" 00:16:58.034 }, 00:16:58.034 "method": "bdev_xnvme_create" 00:16:58.034 }, 00:16:58.034 { 00:16:58.034 "method": "bdev_wait_for_examine" 00:16:58.035 } 00:16:58.035 ] 00:16:58.035 } 00:16:58.035 ] 00:16:58.035 } 00:16:58.293 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:58.293 fio-3.35 00:16:58.293 Starting 1 thread 00:17:04.853 00:17:04.853 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73143: Wed Nov 20 10:53:53 2024 00:17:04.853 read: IOPS=24.6k, BW=96.0MiB/s (101MB/s)(480MiB/5002msec) 00:17:04.853 slat (nsec): min=2238, max=78625, avg=7729.93, stdev=3271.02 00:17:04.853 clat (usec): min=1018, max=2994, avg=2292.56, stdev=239.26 00:17:04.853 lat (usec): min=1021, max=3009, avg=2300.29, stdev=240.24 00:17:04.853 clat percentiles (usec): 00:17:04.853 | 1.00th=[ 1352], 5.00th=[ 1942], 10.00th=[ 2040], 20.00th=[ 2114], 00:17:04.853 | 30.00th=[ 2180], 40.00th=[ 2245], 50.00th=[ 2311], 60.00th=[ 2376], 00:17:04.853 | 70.00th=[ 2409], 80.00th=[ 2507], 90.00th=[ 2573], 95.00th=[ 2638], 00:17:04.853 | 99.00th=[ 2737], 99.50th=[ 2769], 99.90th=[ 2835], 99.95th=[ 2868], 00:17:04.853 | 99.99th=[ 2933] 00:17:04.853 bw ( KiB/s): min=92672, max=109568, per=100.00%, avg=98373.78, stdev=4915.39, samples=9 00:17:04.853 iops : min=23168, max=27392, avg=24593.44, stdev=1228.85, samples=9 00:17:04.853 lat (msec) : 2=7.03%, 4=92.97% 00:17:04.853 cpu : usr=50.05%, sys=46.25%, ctx=7, majf=0, minf=762 00:17:04.853 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:04.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.853 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:04.853 issued rwts: total=122991,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:17:04.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:04.853 00:17:04.853 Run status group 0 (all jobs): 00:17:04.853 READ: bw=96.0MiB/s (101MB/s), 96.0MiB/s-96.0MiB/s (101MB/s-101MB/s), io=480MiB (504MB), run=5002-5002msec 00:17:05.420 ----------------------------------------------------- 00:17:05.420 Suppressions used: 00:17:05.420 count bytes template 00:17:05.420 1 11 /usr/src/fio/parse.c 00:17:05.420 1 8 libtcmalloc_minimal.so 00:17:05.420 1 904 libcrypto.so 00:17:05.420 ----------------------------------------------------- 00:17:05.420 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:05.420 10:53:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:05.420 
{ 00:17:05.420 "subsystems": [ 00:17:05.420 { 00:17:05.420 "subsystem": "bdev", 00:17:05.420 "config": [ 00:17:05.420 { 00:17:05.420 "params": { 00:17:05.420 "io_mechanism": "io_uring_cmd", 00:17:05.420 "conserve_cpu": true, 00:17:05.420 "filename": "/dev/ng0n1", 00:17:05.420 "name": "xnvme_bdev" 00:17:05.420 }, 00:17:05.420 "method": "bdev_xnvme_create" 00:17:05.420 }, 00:17:05.420 { 00:17:05.420 "method": "bdev_wait_for_examine" 00:17:05.420 } 00:17:05.420 ] 00:17:05.420 } 00:17:05.420 ] 00:17:05.420 } 00:17:05.679 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:05.679 fio-3.35 00:17:05.679 Starting 1 thread 00:17:12.241 00:17:12.241 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73243: Wed Nov 20 10:54:00 2024 00:17:12.241 write: IOPS=23.6k, BW=92.2MiB/s (96.7MB/s)(462MiB/5003msec); 0 zone resets 00:17:12.241 slat (usec): min=3, max=119, avg= 8.34, stdev= 3.56 00:17:12.241 clat (usec): min=1510, max=3594, avg=2376.67, stdev=231.18 00:17:12.241 lat (usec): min=1514, max=3634, avg=2385.01, stdev=232.07 00:17:12.241 clat percentiles (usec): 00:17:12.241 | 1.00th=[ 1778], 5.00th=[ 1975], 10.00th=[ 2089], 20.00th=[ 2180], 00:17:12.241 | 30.00th=[ 2245], 40.00th=[ 2311], 50.00th=[ 2376], 60.00th=[ 2442], 00:17:12.241 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 2671], 95.00th=[ 2737], 00:17:12.241 | 99.00th=[ 2802], 99.50th=[ 2835], 99.90th=[ 2966], 99.95th=[ 3130], 00:17:12.241 | 99.99th=[ 3490] 00:17:12.241 bw ( KiB/s): min=91648, max=95744, per=99.37%, avg=93866.67, stdev=1470.61, samples=9 00:17:12.241 iops : min=22912, max=23936, avg=23466.67, stdev=367.65, samples=9 00:17:12.241 lat (msec) : 2=6.04%, 4=93.96% 00:17:12.241 cpu : usr=43.70%, sys=52.50%, ctx=14, majf=0, minf=762 00:17:12.241 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:12.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:12.241 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:12.241 issued rwts: total=0,118144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:12.241 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:12.241 00:17:12.241 Run status group 0 (all jobs): 00:17:12.241 WRITE: bw=92.2MiB/s (96.7MB/s), 92.2MiB/s-92.2MiB/s (96.7MB/s-96.7MB/s), io=462MiB (484MB), run=5003-5003msec 00:17:12.809 ----------------------------------------------------- 00:17:12.809 Suppressions used: 00:17:12.809 count bytes template 00:17:12.809 1 11 /usr/src/fio/parse.c 00:17:12.809 1 8 libtcmalloc_minimal.so 00:17:12.809 1 904 libcrypto.so 00:17:12.809 ----------------------------------------------------- 00:17:12.809 00:17:12.809 00:17:12.809 real 0m14.606s 00:17:12.809 user 0m8.367s 00:17:12.809 sys 0m5.610s 00:17:12.809 10:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.809 10:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:12.809 ************************************ 00:17:12.809 END TEST xnvme_fio_plugin 00:17:12.809 ************************************ 00:17:12.809 10:54:01 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 72699 00:17:12.809 10:54:01 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72699 ']' 00:17:12.809 10:54:01 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 72699 00:17:12.809 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72699) - No such process 00:17:12.809 Process with pid 72699 is not found 00:17:12.809 
10:54:01 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 72699 is not found' 00:17:12.809 00:17:12.809 real 3m47.712s 00:17:12.809 user 2m1.067s 00:17:12.809 sys 1m30.937s 00:17:12.809 10:54:01 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.809 10:54:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:12.809 ************************************ 00:17:12.809 END TEST nvme_xnvme 00:17:12.809 ************************************ 00:17:12.809 10:54:01 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:17:12.809 10:54:01 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:12.809 10:54:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.809 10:54:01 -- common/autotest_common.sh@10 -- # set +x 00:17:12.809 ************************************ 00:17:12.809 START TEST blockdev_xnvme 00:17:12.809 ************************************ 00:17:12.809 10:54:01 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:17:12.809 * Looking for test storage... 00:17:13.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:13.067 10:54:02 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:13.067 10:54:02 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:17:13.067 10:54:02 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:13.067 10:54:02 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:13.067 10:54:02 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:13.067 10:54:02 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:13.067 10:54:02 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:13.067 10:54:02 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.067 10:54:02 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:17:13.067 10:54:02 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:17:13.067 10:54:02 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:17:13.067 10:54:02 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:17:13.067 10:54:02 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:17:13.067 10:54:02 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:17:13.067 10:54:02 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:13.067 10:54:02 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:17:13.067 10:54:02 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:17:13.068 10:54:02 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:13.068 10:54:02 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:13.068 10:54:02 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:17:13.068 10:54:02 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:17:13.068 10:54:02 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.068 10:54:02 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:17:13.068 10:54:02 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.068 10:54:02 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:17:13.068 10:54:02 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:17:13.068 10:54:02 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.068 10:54:02 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:17:13.068 10:54:02 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.068 10:54:02 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.068 10:54:02 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.068 10:54:02 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:17:13.068 10:54:02 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.068 10:54:02 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:13.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.068 --rc genhtml_branch_coverage=1 00:17:13.068 --rc genhtml_function_coverage=1 00:17:13.068 --rc genhtml_legend=1 00:17:13.068 --rc geninfo_all_blocks=1 00:17:13.068 --rc geninfo_unexecuted_blocks=1 00:17:13.068 00:17:13.068 ' 00:17:13.068 10:54:02 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:13.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.068 --rc genhtml_branch_coverage=1 00:17:13.068 --rc genhtml_function_coverage=1 00:17:13.068 --rc genhtml_legend=1 00:17:13.068 --rc geninfo_all_blocks=1 00:17:13.068 --rc geninfo_unexecuted_blocks=1 00:17:13.068 00:17:13.068 ' 00:17:13.068 10:54:02 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:13.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.068 --rc genhtml_branch_coverage=1 00:17:13.068 --rc genhtml_function_coverage=1 00:17:13.068 --rc genhtml_legend=1 00:17:13.068 --rc geninfo_all_blocks=1 00:17:13.068 --rc geninfo_unexecuted_blocks=1 00:17:13.068 00:17:13.068 ' 00:17:13.068 10:54:02 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:13.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.068 --rc genhtml_branch_coverage=1 00:17:13.068 --rc genhtml_function_coverage=1 00:17:13.068 --rc genhtml_legend=1 00:17:13.068 --rc geninfo_all_blocks=1 00:17:13.068 --rc geninfo_unexecuted_blocks=1 00:17:13.068 00:17:13.068 ' 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73383 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:13.068 10:54:02 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73383 00:17:13.068 10:54:02 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73383 ']' 00:17:13.068 10:54:02 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.068 10:54:02 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.068 10:54:02 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.068 10:54:02 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.068 10:54:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:13.068 [2024-11-20 10:54:02.279319] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
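Once this spdk_tgt is up, setup_xnvme_conf walks /dev/nvme*n* and skips anything /sys reports as zoned before queueing a bdev_xnvme_create per namespace, as the get_zoned_devs/is_block_zoned trace below shows. A condensed sketch of that probe, assuming the same sysfs layout; io_uring and the -c flag match the trace.

# Condensed version of the zoned-namespace probe traced below.
io_mechanism=io_uring
nvmes=()
for nvme in /dev/nvme*n*; do
  [[ -b $nvme ]] || continue                       # block nodes only
  dev=${nvme##*/}
  zoned=$(cat "/sys/block/$dev/queue/zoned" 2>/dev/null || echo none)
  [[ $zoned == none ]] || continue                 # skip zoned namespaces
  nvmes+=("bdev_xnvme_create $nvme $dev $io_mechanism -c")
done
printf '%s\n' "${nvmes[@]}"
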
00:17:13.068 [2024-11-20 10:54:02.279663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73383 ] 00:17:13.327 [2024-11-20 10:54:02.457545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.327 [2024-11-20 10:54:02.563309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.263 10:54:03 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.263 10:54:03 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:17:14.263 10:54:03 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:14.263 10:54:03 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:17:14.263 10:54:03 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:17:14.263 10:54:03 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:17:14.263 10:54:03 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:14.894 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:15.511 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:17:15.511 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:17:15.511 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:17:15.771 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:17:15.771 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:17:15.771 10:54:04 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:15.771 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:15.771 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:17:15.771 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:15.771 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:15.771 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:15.771 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:17:15.771 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:15.771 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:17:15.772 10:54:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.772 10:54:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:17:15.772 nvme0n1 00:17:15.772 nvme0n2 00:17:15.772 nvme0n3 00:17:15.772 nvme1n1 00:17:15.772 nvme2n1 00:17:15.772 nvme3n1 00:17:15.772 10:54:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:15.772 10:54:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.772 10:54:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:15.772 10:54:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:15.772 10:54:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.772 10:54:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:15.772 10:54:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:15.772 10:54:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.772 10:54:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:15.772 10:54:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:15.772 10:54:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.772 10:54:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:15.772 10:54:04 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:17:15.772 10:54:04 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:17:15.772 10:54:04 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.772 10:54:04 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.032 10:54:05 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.032 10:54:05 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:16.032 10:54:05 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:16.033 10:54:05 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "65cca499-97f4-438c-91f7-34d1813146b0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "65cca499-97f4-438c-91f7-34d1813146b0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "a0d9adb4-aa8d-44b8-b108-dddcf019153c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a0d9adb4-aa8d-44b8-b108-dddcf019153c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "5949bdbc-8318-4ff4-af89-cc4703496605"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5949bdbc-8318-4ff4-af89-cc4703496605",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9c0ce693-7d62-41c8-bec5-323f6442afff"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9c0ce693-7d62-41c8-bec5-323f6442afff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ff9e9fed-8fd0-4c2b-af90-c45c80064f20"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ff9e9fed-8fd0-4c2b-af90-c45c80064f20",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "9f9d694d-d30a-409a-8019-3fce1f3186d7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9f9d694d-d30a-409a-8019-3fce1f3186d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:16.033 10:54:05 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:16.033 10:54:05 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:17:16.033 10:54:05 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:16.033 10:54:05 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 73383 00:17:16.033 10:54:05 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73383 ']' 00:17:16.033 10:54:05 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73383 00:17:16.033 10:54:05 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:17:16.033 10:54:05 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.033 10:54:05 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73383 00:17:16.033 killing process with pid 73383 00:17:16.033 10:54:05 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.033 10:54:05 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.033 10:54:05 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73383' 00:17:16.033 10:54:05 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73383 00:17:16.033 
10:54:05 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73383 00:17:18.567 10:54:07 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:18.567 10:54:07 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:18.567 10:54:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:18.567 10:54:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.567 10:54:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:18.567 ************************************ 00:17:18.567 START TEST bdev_hello_world 00:17:18.567 ************************************ 00:17:18.567 10:54:07 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:18.567 [2024-11-20 10:54:07.524337] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:17:18.567 [2024-11-20 10:54:07.524466] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73673 ] 00:17:18.567 [2024-11-20 10:54:07.703767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.567 [2024-11-20 10:54:07.813283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.135 [2024-11-20 10:54:08.244894] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:19.135 [2024-11-20 10:54:08.244936] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:17:19.135 [2024-11-20 10:54:08.244955] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:19.135 [2024-11-20 10:54:08.247014] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:19.135 [2024-11-20 10:54:08.247349] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:19.135 [2024-11-20 10:54:08.247373] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:19.135 [2024-11-20 10:54:08.247632] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
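The bdev_hello_world pass above drives the stock SPDK hello_bdev example against the first xnvme bdev: it opens nvme0n1, gets an I/O channel, writes "Hello World!", reads it back, and compares (the closing "Stopping app" notice follows below). A sketch of reproducing the same round trip by hand, using the paths from this run (run as root, since the DPDK EAL needs hugepage access):

cd /home/vagrant/spdk_repo/spdk
# bdev.json is the config generated from the bdev_xnvme_create lines above.
sudo ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1
# Expected NOTICE flow: open bdev -> open io channel -> write -> write
# completed -> read -> "Read string from bdev : Hello World!" -> stop app.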
00:17:19.135 00:17:19.135 [2024-11-20 10:54:08.247661] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:20.073 ************************************ 00:17:20.073 END TEST bdev_hello_world 00:17:20.073 ************************************ 00:17:20.073 00:17:20.073 real 0m1.880s 00:17:20.073 user 0m1.505s 00:17:20.073 sys 0m0.257s 00:17:20.073 10:54:09 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.073 10:54:09 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:20.332 10:54:09 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:20.332 10:54:09 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:20.332 10:54:09 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:20.332 10:54:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:20.332 ************************************ 00:17:20.332 START TEST bdev_bounds 00:17:20.332 ************************************ 00:17:20.332 10:54:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:17:20.332 Process bdevio pid: 73717 00:17:20.332 10:54:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73717 00:17:20.332 10:54:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:20.332 10:54:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:20.332 10:54:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73717' 00:17:20.332 10:54:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73717 00:17:20.332 10:54:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73717 ']' 00:17:20.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.332 10:54:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.332 10:54:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.332 10:54:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.332 10:54:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.332 10:54:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:20.332 [2024-11-20 10:54:09.476469] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:17:20.332 [2024-11-20 10:54:09.476601] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73717 ] 00:17:20.591 [2024-11-20 10:54:09.658093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:20.591 [2024-11-20 10:54:09.767862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.591 [2024-11-20 10:54:09.767997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.591 [2024-11-20 10:54:09.768027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.159 10:54:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.159 10:54:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:17:21.159 10:54:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:21.159 I/O targets: 00:17:21.159 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:21.159 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:21.159 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:21.159 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:17:21.159 nvme2n1: 262144 blocks of 4096 bytes (1024 MiB) 00:17:21.159 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:17:21.159 00:17:21.159 00:17:21.159 CUnit - A unit testing framework for C - Version 2.1-3 00:17:21.159 http://cunit.sourceforge.net/ 00:17:21.159 00:17:21.159 00:17:21.159 Suite: bdevio tests on: nvme3n1 00:17:21.159 Test: blockdev write read block ...passed 00:17:21.159 Test: blockdev write zeroes read block ...passed 00:17:21.159 Test: blockdev write zeroes read no split ...passed 00:17:21.419 Test: blockdev write zeroes read split ...passed 00:17:21.419 Test: blockdev write zeroes read split partial ...passed 00:17:21.419 Test: blockdev reset ...passed 00:17:21.419 Test: blockdev write read 8 blocks ...passed 00:17:21.419 Test: blockdev write read size > 128k ...passed 00:17:21.419 Test: blockdev write read invalid size ...passed 00:17:21.419 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:21.419 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:21.419 Test: blockdev write read max offset ...passed 00:17:21.419 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:21.419 Test: blockdev writev readv 8 blocks ...passed 00:17:21.419 Test: blockdev writev readv 30 x 1block ...passed 00:17:21.419 Test: blockdev writev readv block ...passed 00:17:21.419 Test: blockdev writev readv size > 128k ...passed 00:17:21.419 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:21.419 Test: blockdev comparev and writev ...passed 00:17:21.419 Test: blockdev nvme passthru rw ...passed 00:17:21.419 Test: blockdev nvme passthru vendor specific ...passed 00:17:21.419 Test: blockdev nvme admin passthru ...passed 00:17:21.419 Test: blockdev copy ...passed 00:17:21.419 Suite: bdevio tests on: nvme2n1 00:17:21.419 Test: blockdev write read block ...passed 00:17:21.419 Test: blockdev write zeroes read block ...passed 00:17:21.419 Test: blockdev write zeroes read no split ...passed 00:17:21.419 Test: blockdev write zeroes read split ...passed 00:17:21.419 Test: blockdev write zeroes read split partial ...passed 00:17:21.419 Test: blockdev reset ...passed 
00:17:21.419 Test: blockdev write read 8 blocks ...passed 00:17:21.419 Test: blockdev write read size > 128k ...passed 00:17:21.419 Test: blockdev write read invalid size ...passed 00:17:21.419 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:21.419 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:21.419 Test: blockdev write read max offset ...passed 00:17:21.419 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:21.419 Test: blockdev writev readv 8 blocks ...passed 00:17:21.419 Test: blockdev writev readv 30 x 1block ...passed 00:17:21.419 Test: blockdev writev readv block ...passed 00:17:21.419 Test: blockdev writev readv size > 128k ...passed 00:17:21.419 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:21.419 Test: blockdev comparev and writev ...passed 00:17:21.419 Test: blockdev nvme passthru rw ...passed 00:17:21.419 Test: blockdev nvme passthru vendor specific ...passed 00:17:21.419 Test: blockdev nvme admin passthru ...passed 00:17:21.419 Test: blockdev copy ...passed 00:17:21.419 Suite: bdevio tests on: nvme1n1 00:17:21.419 Test: blockdev write read block ...passed 00:17:21.419 Test: blockdev write zeroes read block ...passed 00:17:21.419 Test: blockdev write zeroes read no split ...passed 00:17:21.419 Test: blockdev write zeroes read split ...passed 00:17:21.419 Test: blockdev write zeroes read split partial ...passed 00:17:21.419 Test: blockdev reset ...passed 00:17:21.419 Test: blockdev write read 8 blocks ...passed 00:17:21.419 Test: blockdev write read size > 128k ...passed 00:17:21.419 Test: blockdev write read invalid size ...passed 00:17:21.419 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:21.419 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:21.419 Test: blockdev write read max offset ...passed 00:17:21.419 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:21.419 Test: blockdev writev readv 8 blocks ...passed 00:17:21.419 Test: blockdev writev readv 30 x 1block ...passed 00:17:21.419 Test: blockdev writev readv block ...passed 00:17:21.419 Test: blockdev writev readv size > 128k ...passed 00:17:21.419 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:21.419 Test: blockdev comparev and writev ...passed 00:17:21.419 Test: blockdev nvme passthru rw ...passed 00:17:21.419 Test: blockdev nvme passthru vendor specific ...passed 00:17:21.419 Test: blockdev nvme admin passthru ...passed 00:17:21.419 Test: blockdev copy ...passed 00:17:21.419 Suite: bdevio tests on: nvme0n3 00:17:21.419 Test: blockdev write read block ...passed 00:17:21.419 Test: blockdev write zeroes read block ...passed 00:17:21.419 Test: blockdev write zeroes read no split ...passed 00:17:21.678 Test: blockdev write zeroes read split ...passed 00:17:21.678 Test: blockdev write zeroes read split partial ...passed 00:17:21.678 Test: blockdev reset ...passed 00:17:21.678 Test: blockdev write read 8 blocks ...passed 00:17:21.678 Test: blockdev write read size > 128k ...passed 00:17:21.678 Test: blockdev write read invalid size ...passed 00:17:21.678 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:21.678 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:21.678 Test: blockdev write read max offset ...passed 00:17:21.678 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:21.678 Test: blockdev writev readv 8 blocks 
...passed 00:17:21.678 Test: blockdev writev readv 30 x 1block ...passed 00:17:21.678 Test: blockdev writev readv block ...passed 00:17:21.678 Test: blockdev writev readv size > 128k ...passed 00:17:21.678 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:21.678 Test: blockdev comparev and writev ...passed 00:17:21.678 Test: blockdev nvme passthru rw ...passed 00:17:21.678 Test: blockdev nvme passthru vendor specific ...passed 00:17:21.678 Test: blockdev nvme admin passthru ...passed 00:17:21.678 Test: blockdev copy ...passed 00:17:21.678 Suite: bdevio tests on: nvme0n2 00:17:21.678 Test: blockdev write read block ...passed 00:17:21.678 Test: blockdev write zeroes read block ...passed 00:17:21.678 Test: blockdev write zeroes read no split ...passed 00:17:21.678 Test: blockdev write zeroes read split ...passed 00:17:21.678 Test: blockdev write zeroes read split partial ...passed 00:17:21.678 Test: blockdev reset ...passed 00:17:21.678 Test: blockdev write read 8 blocks ...passed 00:17:21.678 Test: blockdev write read size > 128k ...passed 00:17:21.678 Test: blockdev write read invalid size ...passed 00:17:21.678 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:21.678 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:21.678 Test: blockdev write read max offset ...passed 00:17:21.678 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:21.678 Test: blockdev writev readv 8 blocks ...passed 00:17:21.678 Test: blockdev writev readv 30 x 1block ...passed 00:17:21.678 Test: blockdev writev readv block ...passed 00:17:21.678 Test: blockdev writev readv size > 128k ...passed 00:17:21.678 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:21.678 Test: blockdev comparev and writev ...passed 00:17:21.678 Test: blockdev nvme passthru rw ...passed 00:17:21.678 Test: blockdev nvme passthru vendor specific ...passed 00:17:21.678 Test: blockdev nvme admin passthru ...passed 00:17:21.678 Test: blockdev copy ...passed 00:17:21.678 Suite: bdevio tests on: nvme0n1 00:17:21.678 Test: blockdev write read block ...passed 00:17:21.678 Test: blockdev write zeroes read block ...passed 00:17:21.678 Test: blockdev write zeroes read no split ...passed 00:17:21.678 Test: blockdev write zeroes read split ...passed 00:17:21.678 Test: blockdev write zeroes read split partial ...passed 00:17:21.678 Test: blockdev reset ...passed 00:17:21.678 Test: blockdev write read 8 blocks ...passed 00:17:21.678 Test: blockdev write read size > 128k ...passed 00:17:21.678 Test: blockdev write read invalid size ...passed 00:17:21.678 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:21.678 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:21.678 Test: blockdev write read max offset ...passed 00:17:21.678 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:21.678 Test: blockdev writev readv 8 blocks ...passed 00:17:21.678 Test: blockdev writev readv 30 x 1block ...passed 00:17:21.678 Test: blockdev writev readv block ...passed 00:17:21.678 Test: blockdev writev readv size > 128k ...passed 00:17:21.678 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:21.678 Test: blockdev comparev and writev ...passed 00:17:21.678 Test: blockdev nvme passthru rw ...passed 00:17:21.678 Test: blockdev nvme passthru vendor specific ...passed 00:17:21.678 Test: blockdev nvme admin passthru ...passed 00:17:21.678 Test: blockdev copy ...passed 
00:17:21.678 00:17:21.678 Run Summary: Type Total Ran Passed Failed Inactive 00:17:21.678 suites 6 6 n/a 0 0 00:17:21.678 tests 138 138 138 0 0 00:17:21.679 asserts 780 780 780 0 n/a 00:17:21.679 00:17:21.679 Elapsed time = 1.278 seconds 00:17:21.679 0 00:17:21.679 10:54:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73717 00:17:21.679 10:54:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73717 ']' 00:17:21.679 10:54:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73717 00:17:21.679 10:54:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:17:21.679 10:54:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.679 10:54:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73717 00:17:21.937 killing process with pid 73717 00:17:21.937 10:54:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.937 10:54:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.937 10:54:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73717' 00:17:21.937 10:54:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73717 00:17:21.937 10:54:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73717 00:17:22.873 10:54:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:22.873 00:17:22.873 real 0m2.642s 00:17:22.873 user 0m6.564s 00:17:22.873 sys 0m0.375s 00:17:22.873 ************************************ 00:17:22.873 END TEST bdev_bounds 00:17:22.873 ************************************ 00:17:22.873 10:54:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.873 10:54:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:22.873 10:54:12 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:17:22.873 10:54:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:22.873 10:54:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.874 10:54:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:22.874 ************************************ 00:17:22.874 START TEST bdev_nbd 00:17:22.874 ************************************ 00:17:22.874 10:54:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:17:22.874 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:22.874 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:22.874 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.874 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:22.874 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:22.874 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:22.874 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
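The bdev_nbd test being set up here exports each of the six xnvme bdevs as a kernel block device over NBD and proves the mapping with a direct read. A condensed sketch of the per-device round trip the trace below performs, assuming a bdev_svc (or any SPDK app) is already listening on the same RPC socket with the same bdev.json loaded:

sock=/var/tmp/spdk-nbd.sock
# Map the bdev onto a kernel NBD node.
scripts/rpc.py -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0
# Wait for the node to appear (the harness retries a bounded number of times).
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
# One direct 4 KiB read confirms the mapping carries I/O.
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
# Tear the mapping down again.
scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0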
00:17:22.874 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:22.874 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:22.874 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:22.874 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:17:22.874 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:22.874 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:22.874 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:23.132 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:23.132 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73775 00:17:23.132 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:23.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:23.132 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:23.132 10:54:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73775 /var/tmp/spdk-nbd.sock 00:17:23.132 10:54:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73775 ']' 00:17:23.132 10:54:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:23.132 10:54:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.132 10:54:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:23.132 10:54:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.132 10:54:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:23.132 [2024-11-20 10:54:12.214752] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:17:23.132 [2024-11-20 10:54:12.214868] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.390 [2024-11-20 10:54:12.395501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.390 [2024-11-20 10:54:12.504076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.958 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.958 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:17:23.958 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:17:23.958 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:23.958 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:23.958 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:23.958 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:17:23.958 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:23.958 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:23.958 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:23.958 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:23.958 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:23.958 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:23.958 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:23.958 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.218 
1+0 records in 00:17:24.218 1+0 records out 00:17:24.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705047 s, 5.8 MB/s 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:24.218 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.477 1+0 records in 00:17:24.477 1+0 records out 00:17:24.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679688 s, 6.0 MB/s 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:24.477 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:17:24.736 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:17:24.736 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:17:24.736 10:54:13 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:17:24.736 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:17:24.736 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:24.736 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.736 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.736 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:17:24.736 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:24.736 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.736 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.736 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.736 1+0 records in 00:17:24.736 1+0 records out 00:17:24.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688879 s, 5.9 MB/s 00:17:24.737 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.737 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:24.737 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.737 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.737 10:54:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:24.737 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:24.737 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:24.737 10:54:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.995 1+0 records in 00:17:24.995 1+0 records out 00:17:24.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765321 s, 5.4 MB/s 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:24.995 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:17:25.254 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:17:25.254 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:17:25.254 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:17:25.254 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:17:25.254 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:25.254 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:25.254 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:25.254 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:17:25.254 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:25.254 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:25.254 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:25.254 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:25.255 1+0 records in 00:17:25.255 1+0 records out 00:17:25.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643382 s, 6.4 MB/s 00:17:25.255 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.255 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:25.255 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.255 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:25.255 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:25.255 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:25.255 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:25.255 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:17:25.255 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:17:25.255 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:17:25.513 10:54:14 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:25.513 1+0 records in 00:17:25.513 1+0 records out 00:17:25.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00079313 s, 5.2 MB/s 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:25.513 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:25.513 { 00:17:25.513 "nbd_device": "/dev/nbd0", 00:17:25.513 "bdev_name": "nvme0n1" 00:17:25.513 }, 00:17:25.513 { 00:17:25.513 "nbd_device": "/dev/nbd1", 00:17:25.513 "bdev_name": "nvme0n2" 00:17:25.513 }, 00:17:25.513 { 00:17:25.513 "nbd_device": "/dev/nbd2", 00:17:25.513 "bdev_name": "nvme0n3" 00:17:25.513 }, 00:17:25.513 { 00:17:25.513 "nbd_device": "/dev/nbd3", 00:17:25.513 "bdev_name": "nvme1n1" 00:17:25.513 }, 00:17:25.513 { 00:17:25.513 "nbd_device": "/dev/nbd4", 00:17:25.513 "bdev_name": "nvme2n1" 00:17:25.513 }, 00:17:25.513 { 00:17:25.513 "nbd_device": "/dev/nbd5", 00:17:25.513 "bdev_name": "nvme3n1" 00:17:25.514 } 00:17:25.514 ]' 00:17:25.514 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:25.514 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:25.514 { 00:17:25.514 "nbd_device": "/dev/nbd0", 00:17:25.514 "bdev_name": "nvme0n1" 00:17:25.514 }, 00:17:25.514 { 00:17:25.514 "nbd_device": "/dev/nbd1", 00:17:25.514 "bdev_name": "nvme0n2" 00:17:25.514 }, 00:17:25.514 { 00:17:25.514 "nbd_device": "/dev/nbd2", 00:17:25.514 "bdev_name": "nvme0n3" 00:17:25.514 }, 00:17:25.514 { 00:17:25.514 "nbd_device": "/dev/nbd3", 00:17:25.514 "bdev_name": "nvme1n1" 00:17:25.514 }, 00:17:25.514 { 00:17:25.514 "nbd_device": "/dev/nbd4", 00:17:25.514 "bdev_name": "nvme2n1" 00:17:25.514 }, 00:17:25.514 { 00:17:25.514 "nbd_device": "/dev/nbd5", 00:17:25.514 "bdev_name": "nvme3n1" 00:17:25.514 } 00:17:25.514 ]' 00:17:25.514 10:54:14 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:25.772 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:17:25.772 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:25.772 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:17:25.772 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:25.772 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:25.772 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.772 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:25.772 10:54:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:25.772 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:25.772 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:25.772 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.772 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.772 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:25.772 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:25.772 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.772 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.772 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:26.031 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:26.031 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:26.031 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:26.031 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.031 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.031 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:26.031 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:26.031 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.031 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.031 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:17:26.290 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:17:26.290 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:17:26.290 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:17:26.290 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.290 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.290 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:17:26.290 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:26.290 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.290 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.290 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:17:26.548 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:17:26.548 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:17:26.548 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:17:26.548 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.548 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.548 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:17:26.548 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:26.548 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.548 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.548 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:17:26.807 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:17:26.807 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:17:26.807 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:17:26.807 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.807 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.807 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:17:26.807 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:26.807 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.807 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.807 10:54:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:17:27.066 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:17:27.066 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:17:27.066 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:17:27.066 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:27.066 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:27.066 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:17:27.066 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:27.066 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:27.066 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:27.066 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:27.066 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:27.325 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:27.326 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:27.326 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:27.326 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:27.326 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:27.326 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:27.326 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:27.326 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:27.326 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:27.326 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:17:27.326 /dev/nbd0 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:27.585 1+0 records in 00:17:27.585 1+0 records out 00:17:27.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688453 s, 5.9 MB/s 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:27.585 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:17:27.585 /dev/nbd1 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:27.844 1+0 records in 00:17:27.844 1+0 records out 00:17:27.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00069432 s, 5.9 MB/s 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:27.844 10:54:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:27.844 10:54:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:17:27.844 /dev/nbd10 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:28.102 1+0 records in 00:17:28.102 1+0 records out 00:17:28.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000851096 s, 4.8 MB/s 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:17:28.102 /dev/nbd11 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:28.102 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:28.102 10:54:17 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:28.361 1+0 records in 00:17:28.361 1+0 records out 00:17:28.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641365 s, 6.4 MB/s 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:17:28.361 /dev/nbd12 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:28.361 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:28.620 1+0 records in 00:17:28.620 1+0 records out 00:17:28.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570001 s, 7.2 MB/s 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:17:28.620 /dev/nbd13 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:28.620 1+0 records in 00:17:28.620 1+0 records out 00:17:28.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664856 s, 6.2 MB/s 00:17:28.620 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.879 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:28.879 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.879 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:28.879 10:54:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:28.879 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:28.879 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:28.879 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:28.879 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:28.879 10:54:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:28.879 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:28.879 { 00:17:28.879 "nbd_device": "/dev/nbd0", 00:17:28.879 "bdev_name": "nvme0n1" 00:17:28.879 }, 00:17:28.879 { 00:17:28.879 "nbd_device": "/dev/nbd1", 00:17:28.879 "bdev_name": "nvme0n2" 00:17:28.879 }, 00:17:28.879 { 00:17:28.879 "nbd_device": "/dev/nbd10", 00:17:28.879 "bdev_name": "nvme0n3" 00:17:28.879 }, 00:17:28.879 { 00:17:28.879 "nbd_device": "/dev/nbd11", 00:17:28.879 "bdev_name": "nvme1n1" 00:17:28.879 }, 00:17:28.879 { 00:17:28.879 "nbd_device": "/dev/nbd12", 00:17:28.879 "bdev_name": "nvme2n1" 00:17:28.879 }, 00:17:28.879 { 00:17:28.879 "nbd_device": "/dev/nbd13", 00:17:28.879 "bdev_name": "nvme3n1" 00:17:28.879 } 00:17:28.879 ]' 00:17:28.879 10:54:18 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:28.879 { 00:17:28.879 "nbd_device": "/dev/nbd0", 00:17:28.879 "bdev_name": "nvme0n1" 00:17:28.879 }, 00:17:28.879 { 00:17:28.879 "nbd_device": "/dev/nbd1", 00:17:28.879 "bdev_name": "nvme0n2" 00:17:28.879 }, 00:17:28.879 { 00:17:28.879 "nbd_device": "/dev/nbd10", 00:17:28.879 "bdev_name": "nvme0n3" 00:17:28.879 }, 00:17:28.879 { 00:17:28.879 "nbd_device": "/dev/nbd11", 00:17:28.879 "bdev_name": "nvme1n1" 00:17:28.879 }, 00:17:28.879 { 00:17:28.879 "nbd_device": "/dev/nbd12", 00:17:28.879 "bdev_name": "nvme2n1" 00:17:28.879 }, 00:17:28.879 { 00:17:28.879 "nbd_device": "/dev/nbd13", 00:17:28.879 "bdev_name": "nvme3n1" 00:17:28.879 } 00:17:28.879 ]' 00:17:28.879 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:29.139 /dev/nbd1 00:17:29.139 /dev/nbd10 00:17:29.139 /dev/nbd11 00:17:29.139 /dev/nbd12 00:17:29.139 /dev/nbd13' 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:29.139 /dev/nbd1 00:17:29.139 /dev/nbd10 00:17:29.139 /dev/nbd11 00:17:29.139 /dev/nbd12 00:17:29.139 /dev/nbd13' 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:29.139 256+0 records in 00:17:29.139 256+0 records out 00:17:29.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110236 s, 95.1 MB/s 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:29.139 256+0 records in 00:17:29.139 256+0 records out 00:17:29.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123032 s, 8.5 MB/s 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:29.139 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:29.398 256+0 records in 00:17:29.398 256+0 records out 00:17:29.398 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.127327 s, 8.2 MB/s 00:17:29.398 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:29.398 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:17:29.398 256+0 records in 00:17:29.398 256+0 records out 00:17:29.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125221 s, 8.4 MB/s 00:17:29.398 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:29.398 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:17:29.656 256+0 records in 00:17:29.656 256+0 records out 00:17:29.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151998 s, 6.9 MB/s 00:17:29.656 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:29.656 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:17:29.656 256+0 records in 00:17:29.656 256+0 records out 00:17:29.656 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127768 s, 8.2 MB/s 00:17:29.656 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:29.656 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:17:29.915 256+0 records in 00:17:29.915 256+0 records out 00:17:29.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124331 s, 8.4 MB/s 00:17:29.915 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:17:29.915 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:29.915 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:29.915 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:29.915 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:29.915 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:29.915 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:29.915 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:29.915 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:29.915 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:29.915 10:54:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:29.915 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:29.915 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:29.915 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:29.915 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:17:29.915 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:29.915 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:29.915 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:29.915 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:29.915 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:29.915 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:29.915 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:29.915 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:29.915 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:29.915 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:29.915 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:29.916 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:30.174 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:30.174 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:30.174 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:30.174 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.174 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.174 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:30.174 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:30.174 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.174 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.174 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:30.432 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:30.432 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:30.432 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:30.432 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.432 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.432 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:30.432 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:30.432 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.432 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.432 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.691 10:54:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:30.949 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:30.949 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:30.949 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:30.949 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.949 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.949 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:30.949 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:30.949 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.949 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.949 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:31.208 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:31.208 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:31.208 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:31.208 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.208 10:54:20 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.208 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:17:31.208 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:31.208 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.208 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:31.208 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.208 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:31.466 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:31.466 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:31.466 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:31.466 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:31.466 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:31.466 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:31.466 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:31.466 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:31.466 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:31.466 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:31.466 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:31.466 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:31.466 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:31.466 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.467 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:31.467 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:31.726 malloc_lvol_verify 00:17:31.726 10:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:31.984 7d0b32f0-5078-4d25-a4ad-9609f27e3278 00:17:31.984 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:32.243 c56a911d-85cb-42b5-8020-f3154c71f6ed 00:17:32.243 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:32.243 /dev/nbd0 00:17:32.243 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:32.243 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:32.243 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:32.243 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:32.243 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
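The lvol pass traced here is the suite's end-to-end writability check for NBD: a 16 MiB malloc bdev is created over the RPC socket, an lvstore named lvs and a 4 MiB lvol are layered on top, the lvol is exported as /dev/nbd0, the helper confirms the kernel reports a non-zero capacity in /sys/block/nbd0/size, and mkfs.ext4 is run against the device (its mke2fs output follows below). A minimal sketch of that flow, assuming a running SPDK target with its RPC socket at /var/tmp/spdk-nbd.sock and a free /dev/nbd0; the polling loop is a simplified stand-in for the wait_for_nbd_set_capacity helper seen in the trace, whose timeout and poll interval may differ:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # 16 MiB malloc bdev with 512-byte blocks, as in the trace above
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc bdev_lvol_create lvol 4 -l lvs          # 4 MiB lvol inside lvs
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    # wait until the kernel sees a non-zero size before formatting
    # (simplified; a real helper should also bound the number of retries)
    while (( $(cat /sys/block/nbd0/size) == 0 )); do sleep 0.1; done
    mkfs.ext4 /dev/nbd0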
00:17:32.243 mke2fs 1.47.0 (5-Feb-2023) 00:17:32.243 Discarding device blocks: 0/4096 done 00:17:32.243 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:32.243 00:17:32.243 Allocating group tables: 0/1 done 00:17:32.243 Writing inode tables: 0/1 done 00:17:32.243 Creating journal (1024 blocks): done 00:17:32.243 Writing superblocks and filesystem accounting information: 0/1 done 00:17:32.243 00:17:32.243 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:32.243 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:32.243 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:32.243 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:32.243 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:32.243 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.243 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73775 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73775 ']' 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73775 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73775 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:32.501 killing process with pid 73775 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73775' 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73775 00:17:32.501 10:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73775 00:17:33.877 10:54:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:33.877 00:17:33.877 real 0m10.749s 00:17:33.877 user 0m13.658s 00:17:33.877 sys 0m4.624s 00:17:33.877 10:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.877 10:54:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:33.877 ************************************ 
00:17:33.877 END TEST bdev_nbd 00:17:33.877 ************************************ 00:17:33.877 10:54:22 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:33.877 10:54:22 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:17:33.877 10:54:22 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:17:33.877 10:54:22 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:33.877 10:54:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:33.877 10:54:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.877 10:54:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:33.877 ************************************ 00:17:33.877 START TEST bdev_fio 00:17:33.877 ************************************ 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:33.877 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:17:33.877 10:54:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:33.877 ************************************ 00:17:33.877 START TEST bdev_fio_rw_verify 00:17:33.877 ************************************ 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:33.877 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:33.878 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:33.878 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:33.878 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:33.878 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:17:33.878 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:33.878 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:33.878 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:33.878 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:17:33.878 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:33.878 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:33.878 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:33.878 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:17:33.878 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:33.878 10:54:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:34.136 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:34.136 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:34.136 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:34.136 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:34.136 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:34.136 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:34.136 fio-3.35 00:17:34.136 Starting 6 threads 00:17:46.339 00:17:46.339 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74189: Wed Nov 20 10:54:34 2024 00:17:46.339 read: IOPS=32.3k, BW=126MiB/s (133MB/s)(1264MiB/10001msec) 00:17:46.339 slat (usec): min=2, max=1043, avg= 6.02, stdev= 3.94 00:17:46.339 clat (usec): min=98, max=3129, avg=610.46, 
stdev=152.90 00:17:46.339 lat (usec): min=104, max=3135, avg=616.48, stdev=153.52 00:17:46.339 clat percentiles (usec): 00:17:46.339 | 50.000th=[ 644], 99.000th=[ 938], 99.900th=[ 1336], 99.990th=[ 2474], 00:17:46.339 | 99.999th=[ 3130] 00:17:46.339 write: IOPS=32.8k, BW=128MiB/s (134MB/s)(1283MiB/10001msec); 0 zone resets 00:17:46.339 slat (usec): min=11, max=1001, avg=18.03, stdev=15.49 00:17:46.339 clat (usec): min=81, max=3514, avg=672.42, stdev=148.92 00:17:46.339 lat (usec): min=97, max=3531, avg=690.45, stdev=149.49 00:17:46.339 clat percentiles (usec): 00:17:46.339 | 50.000th=[ 685], 99.000th=[ 1074], 99.900th=[ 1500], 99.990th=[ 2343], 00:17:46.339 | 99.999th=[ 2900] 00:17:46.339 bw ( KiB/s): min=110328, max=146968, per=99.61%, avg=130822.00, stdev=2111.86, samples=114 00:17:46.339 iops : min=27582, max=36742, avg=32705.47, stdev=527.97, samples=114 00:17:46.339 lat (usec) : 100=0.01%, 250=2.65%, 500=11.15%, 750=70.73%, 1000=14.39% 00:17:46.339 lat (msec) : 2=1.06%, 4=0.03% 00:17:46.339 cpu : usr=68.45%, sys=21.96%, ctx=7290, majf=0, minf=27093 00:17:46.339 IO depths : 1=12.2%, 2=24.7%, 4=50.3%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:46.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.339 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.339 issued rwts: total=323526,328361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:46.339 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:46.339 00:17:46.339 Run status group 0 (all jobs): 00:17:46.339 READ: bw=126MiB/s (133MB/s), 126MiB/s-126MiB/s (133MB/s-133MB/s), io=1264MiB (1325MB), run=10001-10001msec 00:17:46.339 WRITE: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=1283MiB (1345MB), run=10001-10001msec 00:17:46.339 ----------------------------------------------------- 00:17:46.339 Suppressions used: 00:17:46.339 count bytes template 00:17:46.339 6 48 /usr/src/fio/parse.c 00:17:46.339 4583 439968 /usr/src/fio/iolog.c 00:17:46.339 1 8 libtcmalloc_minimal.so 00:17:46.339 1 904 libcrypto.so 00:17:46.339 ----------------------------------------------------- 00:17:46.339 00:17:46.339 00:17:46.339 real 0m12.429s 00:17:46.339 user 0m42.997s 00:17:46.339 sys 0m13.591s 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:46.339 ************************************ 00:17:46.339 END TEST bdev_fio_rw_verify 00:17:46.339 ************************************ 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 
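The rw_verify pass that just finished shows how the suite drives stock fio through SPDK's external bdev ioengine: the plugin is LD_PRELOADed (together with libasan, since this build runs under ASAN) and fio is pointed at the generated job file plus the bdev JSON config. A sketch of the invocation with the parameters traced above, assuming fio built under /usr/src/fio and the plugin at build/fio/spdk_bdev:

    spdk=/home/vagrant/spdk_repo/spdk
    LD_PRELOAD="/usr/lib64/libasan.so.8 $spdk/build/fio/spdk_bdev" \
      /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        "$spdk/test/bdev/bdev.fio" --verify_state_save=0 \
        --spdk_json_conf="$spdk/test/bdev/bdev.json" --spdk_mem=0 \
        --aux-path="$spdk/../output"

The trace below regenerates bdev.fio in the same way for the trim workload (rw=trimwrite) before the next fio run.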
00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:46.339 10:54:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:46.340 10:54:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:46.340 10:54:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "65cca499-97f4-438c-91f7-34d1813146b0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "65cca499-97f4-438c-91f7-34d1813146b0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "a0d9adb4-aa8d-44b8-b108-dddcf019153c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a0d9adb4-aa8d-44b8-b108-dddcf019153c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "5949bdbc-8318-4ff4-af89-cc4703496605"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5949bdbc-8318-4ff4-af89-cc4703496605",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": 
false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "9c0ce693-7d62-41c8-bec5-323f6442afff"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9c0ce693-7d62-41c8-bec5-323f6442afff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ff9e9fed-8fd0-4c2b-af90-c45c80064f20"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ff9e9fed-8fd0-4c2b-af90-c45c80064f20",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "9f9d694d-d30a-409a-8019-3fce1f3186d7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9f9d694d-d30a-409a-8019-3fce1f3186d7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:46.599 10:54:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:46.599 10:54:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:46.599 /home/vagrant/spdk_repo/spdk 00:17:46.599 10:54:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:46.599 10:54:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:46.599 10:54:35 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:17:46.599 00:17:46.599 
real 0m12.662s 00:17:46.599 user 0m43.106s 00:17:46.599 sys 0m13.717s 00:17:46.599 10:54:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.599 10:54:35 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:46.599 ************************************ 00:17:46.599 END TEST bdev_fio 00:17:46.599 ************************************ 00:17:46.599 10:54:35 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:46.599 10:54:35 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:46.599 10:54:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:46.599 10:54:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.599 10:54:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:46.599 ************************************ 00:17:46.599 START TEST bdev_verify 00:17:46.599 ************************************ 00:17:46.599 10:54:35 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:46.599 [2024-11-20 10:54:35.766911] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:17:46.599 [2024-11-20 10:54:35.767025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74359 ] 00:17:46.858 [2024-11-20 10:54:35.945310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:46.858 [2024-11-20 10:54:36.059099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.858 [2024-11-20 10:54:36.059129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.427 Running I/O for 5 seconds... 
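
Note: bdev_verify exercises the same bdevs with the bdevperf example app instead of fio. Reading the invocation above flag by flag, per bdevperf's usage text (the trailing empty string is an unset optional argument the harness passes through):

# Sketch of the equivalent manual bdevperf run:
./build/examples/bdevperf \
    --json ./test/bdev/bdev.json \   # JSON config that creates the bdevs
    -q 128 \                         # queue depth per job
    -o 4096 \                        # I/O size in bytes
    -w verify \                      # write, then read back and compare
    -t 5 \                           # run time in seconds
    -C \                             # every core submits I/O to every bdev
    -m 0x3                           # core mask: reactors on cores 0 and 1

The -m 0x3 mask is why two reactors start below, and why each bdev appears twice in the result table, once per core-mask-0x1 and core-mask-0x2 job.
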
00:17:49.741 25184.00 IOPS, 98.38 MiB/s [2024-11-20T10:54:39.929Z] 23792.00 IOPS, 92.94 MiB/s [2024-11-20T10:54:40.871Z] 24170.67 IOPS, 94.42 MiB/s [2024-11-20T10:54:41.805Z] 24320.00 IOPS, 95.00 MiB/s [2024-11-20T10:54:41.805Z] 24294.40 IOPS, 94.90 MiB/s 00:17:52.552 Latency(us) 00:17:52.552 [2024-11-20T10:54:41.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.552 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:52.552 Verification LBA range: start 0x0 length 0x80000 00:17:52.552 nvme0n1 : 5.04 1804.47 7.05 0.00 0.00 70802.24 11054.27 64009.46 00:17:52.552 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:52.552 Verification LBA range: start 0x80000 length 0x80000 00:17:52.552 nvme0n1 : 5.06 1897.32 7.41 0.00 0.00 67360.55 14739.02 60219.42 00:17:52.552 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:52.552 Verification LBA range: start 0x0 length 0x80000 00:17:52.552 nvme0n2 : 5.07 1792.90 7.00 0.00 0.00 71124.96 12422.89 63588.34 00:17:52.552 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:52.552 Verification LBA range: start 0x80000 length 0x80000 00:17:52.552 nvme0n2 : 5.07 1918.21 7.49 0.00 0.00 66532.06 10106.76 63588.34 00:17:52.552 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:52.552 Verification LBA range: start 0x0 length 0x80000 00:17:52.552 nvme0n3 : 5.04 1803.77 7.05 0.00 0.00 70558.58 11370.10 69905.07 00:17:52.552 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:52.552 Verification LBA range: start 0x80000 length 0x80000 00:17:52.552 nvme0n3 : 5.06 1896.04 7.41 0.00 0.00 67205.06 10633.15 57692.74 00:17:52.552 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:52.552 Verification LBA range: start 0x0 length 0xbd0bd 00:17:52.552 nvme1n1 : 5.08 2656.24 10.38 0.00 0.00 47804.08 5711.37 57271.62 00:17:52.552 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:52.552 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:52.552 nvme1n1 : 5.07 2712.63 10.60 0.00 0.00 46858.21 5711.37 52849.91 00:17:52.552 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:52.552 Verification LBA range: start 0x0 length 0x20000 00:17:52.552 nvme2n1 : 5.09 1812.32 7.08 0.00 0.00 69906.59 5737.69 67799.49 00:17:52.552 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:52.552 Verification LBA range: start 0x20000 length 0x20000 00:17:52.552 nvme2n1 : 5.08 1941.49 7.58 0.00 0.00 65347.81 8422.30 61061.65 00:17:52.552 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:52.552 Verification LBA range: start 0x0 length 0xa0000 00:17:52.552 nvme3n1 : 5.09 1811.85 7.08 0.00 0.00 69851.36 6422.00 69905.07 00:17:52.552 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:52.552 Verification LBA range: start 0xa0000 length 0xa0000 00:17:52.552 nvme3n1 : 5.08 1915.86 7.48 0.00 0.00 66161.44 3092.56 60219.42 00:17:52.552 [2024-11-20T10:54:41.805Z] =================================================================================================================== 00:17:52.552 [2024-11-20T10:54:41.805Z] Total : 23963.10 93.61 0.00 0.00 63686.58 3092.56 69905.07 00:17:53.488 00:17:53.488 real 0m7.061s 00:17:53.488 user 0m10.677s 00:17:53.488 sys 0m2.115s 00:17:53.488 10:54:42 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.488 10:54:42 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:53.488 ************************************ 00:17:53.488 END TEST bdev_verify 00:17:53.488 ************************************ 00:17:53.747 10:54:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:53.747 10:54:42 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:53.747 10:54:42 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.747 10:54:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:53.747 ************************************ 00:17:53.747 START TEST bdev_verify_big_io 00:17:53.747 ************************************ 00:17:53.747 10:54:42 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:53.747 [2024-11-20 10:54:42.904388] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:17:53.747 [2024-11-20 10:54:42.904520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74459 ] 00:17:54.005 [2024-11-20 10:54:43.085965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:54.005 [2024-11-20 10:54:43.194505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.005 [2024-11-20 10:54:43.194535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.571 Running I/O for 5 seconds... 
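
A quick consistency check on the tables above: bdevperf's MiB/s column is just IOPS times the I/O size, scaled to mebibytes, i.e. MiB/s = IOPS x io_size / 2^20. For the final verify sample, 24294.40 IOPS x 4096 B = 99,509,862 B/s, which is 94.90 MiB/s, matching the reported "24294.40 IOPS, 94.90 MiB/s". The same relation holds for the 65536-byte big_io table below: 2163.58 IOPS x 64 KiB comes to 135.22 MiB/s, the total that table reports.
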
00:17:59.637 1360.00 IOPS, 85.00 MiB/s [2024-11-20T10:54:49.826Z] 3000.00 IOPS, 187.50 MiB/s [2024-11-20T10:54:49.826Z] 3890.33 IOPS, 243.15 MiB/s 00:18:00.573 Latency(us) 00:18:00.573 [2024-11-20T10:54:49.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.573 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:00.573 Verification LBA range: start 0x0 length 0x8000 00:18:00.573 nvme0n1 : 5.64 156.13 9.76 0.00 0.00 801786.42 91803.04 1057840.53 00:18:00.573 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:00.573 Verification LBA range: start 0x8000 length 0x8000 00:18:00.573 nvme0n1 : 5.45 187.97 11.75 0.00 0.00 657351.97 52639.36 788327.02 00:18:00.573 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:00.573 Verification LBA range: start 0x0 length 0x8000 00:18:00.573 nvme0n2 : 5.59 183.08 11.44 0.00 0.00 661984.85 16423.48 656939.18 00:18:00.573 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:00.573 Verification LBA range: start 0x8000 length 0x8000 00:18:00.573 nvme0n2 : 5.52 185.36 11.58 0.00 0.00 650818.52 5000.74 848967.56 00:18:00.573 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:00.573 Verification LBA range: start 0x0 length 0x8000 00:18:00.573 nvme0n3 : 5.60 133.48 8.34 0.00 0.00 883193.43 45480.40 2304340.51 00:18:00.573 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:00.573 Verification LBA range: start 0x8000 length 0x8000 00:18:00.573 nvme0n3 : 5.53 159.25 9.95 0.00 0.00 738763.47 84222.97 1118481.07 00:18:00.573 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:00.573 Verification LBA range: start 0x0 length 0xbd0b 00:18:00.573 nvme1n1 : 5.64 226.85 14.18 0.00 0.00 513465.00 9264.53 579454.05 00:18:00.573 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:00.573 Verification LBA range: start 0xbd0b length 0xbd0b 00:18:00.573 nvme1n1 : 5.61 196.96 12.31 0.00 0.00 592476.57 25477.45 1286927.01 00:18:00.573 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:00.573 Verification LBA range: start 0x0 length 0x2000 00:18:00.573 nvme2n1 : 5.70 187.63 11.73 0.00 0.00 600264.62 9001.33 1455372.95 00:18:00.573 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:00.573 Verification LBA range: start 0x2000 length 0x2000 00:18:00.573 nvme2n1 : 5.61 171.12 10.69 0.00 0.00 666243.01 1895.02 1361043.23 00:18:00.573 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:00.573 Verification LBA range: start 0x0 length 0xa000 00:18:00.573 nvme3n1 : 5.70 173.89 10.87 0.00 0.00 641013.53 1164.65 1718148.63 00:18:00.573 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:00.573 Verification LBA range: start 0xa000 length 0xa000 00:18:00.573 nvme3n1 : 5.71 201.86 12.62 0.00 0.00 551847.56 352.03 670414.86 00:18:00.573 [2024-11-20T10:54:49.826Z] =================================================================================================================== 00:18:00.573 [2024-11-20T10:54:49.826Z] Total : 2163.58 135.22 0.00 0.00 650715.87 352.03 2304340.51 00:18:01.949 00:18:01.949 real 0m8.031s 00:18:01.949 user 0m14.588s 00:18:01.949 sys 0m0.560s 00:18:01.949 10:54:50 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.949 10:54:50 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:01.949 ************************************ 00:18:01.949 END TEST bdev_verify_big_io 00:18:01.949 ************************************ 00:18:01.949 10:54:50 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:01.949 10:54:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:01.949 10:54:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.949 10:54:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:01.949 ************************************ 00:18:01.949 START TEST bdev_write_zeroes 00:18:01.949 ************************************ 00:18:01.949 10:54:50 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:01.949 [2024-11-20 10:54:51.018580] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:18:01.949 [2024-11-20 10:54:51.018728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74569 ] 00:18:01.949 [2024-11-20 10:54:51.198266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.209 [2024-11-20 10:54:51.302991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.775 Running I/O for 1 seconds... 
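
Note: bdev_write_zeroes only makes sense on bdevs that advertise the write_zeroes I/O type, and the bdev dump earlier in this log shows "write_zeroes": true for all six xNVMe bdevs. The same jq select() pattern the fio stage used for unmap can filter for it against a live target; a sketch (rpc.py is SPDK's standard RPC client):

# List bdevs on a running target that support write_zeroes:
./scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name'

Note the added '.[]': bdev_get_bdevs returns a JSON array, whereas the printf'd objects earlier in this log form a bare stream that jq's select() consumes directly.
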
00:18:03.711 51616.00 IOPS, 201.62 MiB/s 00:18:03.711 Latency(us) 00:18:03.711 [2024-11-20T10:54:52.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.711 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:03.711 nvme0n1 : 1.03 7920.88 30.94 0.00 0.00 16141.87 8422.30 45059.29 00:18:03.711 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:03.711 nvme0n2 : 1.04 7904.45 30.88 0.00 0.00 16168.51 8317.02 45690.96 00:18:03.711 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:03.711 nvme0n3 : 1.04 7888.60 30.81 0.00 0.00 16191.26 8211.74 46533.19 00:18:03.711 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:03.711 nvme1n1 : 1.05 11011.89 43.02 0.00 0.00 11589.79 4790.18 27372.47 00:18:03.711 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:03.711 nvme2n1 : 1.04 7841.97 30.63 0.00 0.00 16188.27 4026.91 44217.06 00:18:03.711 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:03.711 nvme3n1 : 1.05 7822.03 30.55 0.00 0.00 16207.97 3289.96 45269.85 00:18:03.711 [2024-11-20T10:54:52.964Z] =================================================================================================================== 00:18:03.711 [2024-11-20T10:54:52.964Z] Total : 50389.83 196.84 0.00 0.00 15169.88 3289.96 46533.19 00:18:04.647 00:18:04.647 real 0m2.961s 00:18:04.647 user 0m2.179s 00:18:04.647 sys 0m0.586s 00:18:04.647 10:54:53 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.647 10:54:53 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:04.647 ************************************ 00:18:04.647 END TEST bdev_write_zeroes 00:18:04.647 ************************************ 00:18:04.906 10:54:53 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:04.906 10:54:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:04.906 10:54:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.906 10:54:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:04.906 ************************************ 00:18:04.906 START TEST bdev_json_nonenclosed 00:18:04.906 ************************************ 00:18:04.906 10:54:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:04.906 [2024-11-20 10:54:54.052851] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:18:04.906 [2024-11-20 10:54:54.052977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74628 ] 00:18:05.165 [2024-11-20 10:54:54.232869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.165 [2024-11-20 10:54:54.336002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.165 [2024-11-20 10:54:54.336093] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:05.165 [2024-11-20 10:54:54.336114] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:05.165 [2024-11-20 10:54:54.336126] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:05.423 00:18:05.423 real 0m0.623s 00:18:05.423 user 0m0.386s 00:18:05.423 sys 0m0.132s 00:18:05.423 10:54:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.423 10:54:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:05.423 ************************************ 00:18:05.423 END TEST bdev_json_nonenclosed 00:18:05.423 ************************************ 00:18:05.423 10:54:54 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:05.423 10:54:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:05.423 10:54:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.423 10:54:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:05.423 ************************************ 00:18:05.423 START TEST bdev_json_nonarray 00:18:05.423 ************************************ 00:18:05.423 10:54:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:05.681 [2024-11-20 10:54:54.750832] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:18:05.681 [2024-11-20 10:54:54.750942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74659 ] 00:18:05.681 [2024-11-20 10:54:54.931135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.939 [2024-11-20 10:54:55.038163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.939 [2024-11-20 10:54:55.038256] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
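
Note: bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each feeds bdevperf a config that violates one of the two structural rules named in the errors above, then expects a clean non-zero shutdown rather than a crash. The valid shape is the one the ublk suite saves later in this log; the two bad shapes below are illustrative guesses at the fixtures, not their verbatim contents.

# valid: top level is an object holding a "subsystems" array
{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }

# nonenclosed.json: trips "not enclosed in {}" (top level is not an object)
[ { "subsystem": "bdev", "config": [] } ]

# nonarray.json: trips "'subsystems' should be an array"
{ "subsystems": { "subsystem": "bdev", "config": [] } }
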
00:18:05.939 [2024-11-20 10:54:55.038278] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:05.939 [2024-11-20 10:54:55.038290] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:06.197 00:18:06.197 real 0m0.621s 00:18:06.197 user 0m0.377s 00:18:06.197 sys 0m0.140s 00:18:06.197 10:54:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.197 ************************************ 00:18:06.197 10:54:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:06.197 END TEST bdev_json_nonarray 00:18:06.197 ************************************ 00:18:06.197 10:54:55 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:18:06.197 10:54:55 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:18:06.197 10:54:55 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:18:06.197 10:54:55 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:18:06.197 10:54:55 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:18:06.197 10:54:55 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:06.197 10:54:55 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:06.197 10:54:55 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:18:06.197 10:54:55 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:18:06.197 10:54:55 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:18:06.197 10:54:55 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:18:06.197 10:54:55 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:07.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:07.700 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:07.700 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:07.958 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:07.958 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:07.958 00:18:07.958 real 0m55.234s 00:18:07.959 user 1m39.796s 00:18:07.959 sys 0m26.229s 00:18:07.959 10:54:57 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.959 ************************************ 00:18:07.959 END TEST blockdev_xnvme 00:18:07.959 ************************************ 00:18:07.959 10:54:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:08.217 10:54:57 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:08.217 10:54:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:08.217 10:54:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.217 10:54:57 -- common/autotest_common.sh@10 -- # set +x 00:18:08.217 ************************************ 00:18:08.217 START TEST ublk 00:18:08.217 ************************************ 00:18:08.217 10:54:57 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:08.217 * Looking for test storage... 
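
Note: between suites the harness reruns scripts/setup.sh, whose output appears just above: it rebinds the four NVMe controllers from the kernel nvme driver to uio_pci_generic so the next SPDK target can claim them, and it skips 00:03.0 because mounted filesystems (vda2/vda3/vda5) sit on that device. A hedged sketch of driving it by hand, using subcommands from SPDK's documentation:

# Rebind devices for SPDK use (run as root; picks uio_pci_generic or vfio-pci):
sudo HUGEMEM=4096 ./scripts/setup.sh
sudo ./scripts/setup.sh status     # show which driver each device is bound to
sudo ./scripts/setup.sh reset      # hand devices back to the kernel nvme driver
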
00:18:08.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:08.217 10:54:57 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:08.217 10:54:57 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:18:08.217 10:54:57 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:08.217 10:54:57 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:08.217 10:54:57 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.217 10:54:57 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.217 10:54:57 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.217 10:54:57 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.217 10:54:57 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.217 10:54:57 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.217 10:54:57 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:18:08.217 10:54:57 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:18:08.218 10:54:57 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:18:08.218 10:54:57 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:18:08.218 10:54:57 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.218 10:54:57 ublk -- scripts/common.sh@344 -- # case "$op" in 00:18:08.218 10:54:57 ublk -- scripts/common.sh@345 -- # : 1 00:18:08.218 10:54:57 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.218 10:54:57 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:08.218 10:54:57 ublk -- scripts/common.sh@365 -- # decimal 1 00:18:08.218 10:54:57 ublk -- scripts/common.sh@353 -- # local d=1 00:18:08.218 10:54:57 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.218 10:54:57 ublk -- scripts/common.sh@355 -- # echo 1 00:18:08.218 10:54:57 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.218 10:54:57 ublk -- scripts/common.sh@366 -- # decimal 2 00:18:08.218 10:54:57 ublk -- scripts/common.sh@353 -- # local d=2 00:18:08.218 10:54:57 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.218 10:54:57 ublk -- scripts/common.sh@355 -- # echo 2 00:18:08.218 10:54:57 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:18:08.218 10:54:57 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.218 10:54:57 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.218 10:54:57 ublk -- scripts/common.sh@368 -- # return 0 00:18:08.218 10:54:57 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.218 10:54:57 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:08.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.218 --rc genhtml_branch_coverage=1 00:18:08.218 --rc genhtml_function_coverage=1 00:18:08.218 --rc genhtml_legend=1 00:18:08.218 --rc geninfo_all_blocks=1 00:18:08.218 --rc geninfo_unexecuted_blocks=1 00:18:08.218 00:18:08.218 ' 00:18:08.218 10:54:57 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:08.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.218 --rc genhtml_branch_coverage=1 00:18:08.218 --rc genhtml_function_coverage=1 00:18:08.218 --rc genhtml_legend=1 00:18:08.218 --rc geninfo_all_blocks=1 00:18:08.218 --rc geninfo_unexecuted_blocks=1 00:18:08.218 00:18:08.218 ' 00:18:08.218 10:54:57 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:08.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.218 --rc genhtml_branch_coverage=1 00:18:08.218 --rc 
genhtml_function_coverage=1 00:18:08.218 --rc genhtml_legend=1 00:18:08.218 --rc geninfo_all_blocks=1 00:18:08.218 --rc geninfo_unexecuted_blocks=1 00:18:08.218 00:18:08.218 ' 00:18:08.218 10:54:57 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:08.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.218 --rc genhtml_branch_coverage=1 00:18:08.218 --rc genhtml_function_coverage=1 00:18:08.218 --rc genhtml_legend=1 00:18:08.218 --rc geninfo_all_blocks=1 00:18:08.218 --rc geninfo_unexecuted_blocks=1 00:18:08.218 00:18:08.218 ' 00:18:08.477 10:54:57 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:08.477 10:54:57 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:08.477 10:54:57 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:08.477 10:54:57 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:08.477 10:54:57 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:08.477 10:54:57 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:08.477 10:54:57 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:08.477 10:54:57 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:08.477 10:54:57 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:08.477 10:54:57 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:18:08.477 10:54:57 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:18:08.477 10:54:57 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:18:08.477 10:54:57 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:18:08.477 10:54:57 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:18:08.477 10:54:57 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:18:08.477 10:54:57 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:18:08.477 10:54:57 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:18:08.477 10:54:57 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:18:08.477 10:54:57 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:18:08.477 10:54:57 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:18:08.477 10:54:57 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:08.477 10:54:57 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.477 10:54:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:08.477 ************************************ 00:18:08.477 START TEST test_save_ublk_config 00:18:08.477 ************************************ 00:18:08.477 10:54:57 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:18:08.477 10:54:57 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:18:08.477 10:54:57 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=74944 00:18:08.477 10:54:57 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:18:08.477 10:54:57 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 74944 00:18:08.477 10:54:57 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 74944 ']' 00:18:08.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.477 10:54:57 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.477 10:54:57 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.477 10:54:57 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
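
Note: test_save_ublk_config is a save/restore round trip: a first target builds up ublk state, save_config dumps the live JSON, and a second target must come back identically from that dump. A sketch of the flow, reconstructed from the RPC names visible in this log (the rpc.py flag spellings are assumptions; the parameter values are the ones the saved config below shows):

# 1) first target: create the state by hand
./build/bin/spdk_tgt -L ublk &
./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # 8192 blocks x 4096 B = 32 MiB
./scripts/rpc.py ublk_create_target --cpumask 1
./scripts/rpc.py ublk_start_disk malloc0 0 --num-queues 1 --queue-depth 128
./scripts/rpc.py save_config > ublk.json                 # full dump, as printed below

# 2) second target: boot straight from the saved JSON
./build/bin/spdk_tgt -L ublk -c ublk.json
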
00:18:08.477 10:54:57 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.477 10:54:57 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:08.477 10:54:57 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:18:08.477 [2024-11-20 10:54:57.621685] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:18:08.477 [2024-11-20 10:54:57.621845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74944 ] 00:18:08.736 [2024-11-20 10:54:57.803883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.736 [2024-11-20 10:54:57.910932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.671 10:54:58 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.671 10:54:58 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:18:09.671 10:54:58 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:18:09.671 10:54:58 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:18:09.671 10:54:58 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.671 10:54:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:09.671 [2024-11-20 10:54:58.813620] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:09.671 [2024-11-20 10:54:58.814771] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:09.671 malloc0 00:18:09.671 [2024-11-20 10:54:58.899742] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:09.671 [2024-11-20 10:54:58.899828] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:09.671 [2024-11-20 10:54:58.899841] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:09.671 [2024-11-20 10:54:58.899849] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:09.671 [2024-11-20 10:54:58.907645] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:09.671 [2024-11-20 10:54:58.907666] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:09.671 [2024-11-20 10:54:58.915629] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:09.671 [2024-11-20 10:54:58.915731] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:09.930 [2024-11-20 10:54:58.939634] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:09.930 0 00:18:09.930 10:54:58 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.930 10:54:58 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:18:09.930 10:54:58 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.930 10:54:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:10.188 10:54:59 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.188 10:54:59 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:18:10.188 "subsystems": [ 00:18:10.188 { 00:18:10.188 "subsystem": 
"fsdev", 00:18:10.188 "config": [ 00:18:10.188 { 00:18:10.188 "method": "fsdev_set_opts", 00:18:10.188 "params": { 00:18:10.188 "fsdev_io_pool_size": 65535, 00:18:10.188 "fsdev_io_cache_size": 256 00:18:10.188 } 00:18:10.188 } 00:18:10.188 ] 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "subsystem": "keyring", 00:18:10.188 "config": [] 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "subsystem": "iobuf", 00:18:10.188 "config": [ 00:18:10.188 { 00:18:10.188 "method": "iobuf_set_options", 00:18:10.188 "params": { 00:18:10.188 "small_pool_count": 8192, 00:18:10.188 "large_pool_count": 1024, 00:18:10.188 "small_bufsize": 8192, 00:18:10.188 "large_bufsize": 135168, 00:18:10.188 "enable_numa": false 00:18:10.188 } 00:18:10.188 } 00:18:10.188 ] 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "subsystem": "sock", 00:18:10.188 "config": [ 00:18:10.188 { 00:18:10.188 "method": "sock_set_default_impl", 00:18:10.188 "params": { 00:18:10.188 "impl_name": "posix" 00:18:10.188 } 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "method": "sock_impl_set_options", 00:18:10.188 "params": { 00:18:10.188 "impl_name": "ssl", 00:18:10.188 "recv_buf_size": 4096, 00:18:10.188 "send_buf_size": 4096, 00:18:10.188 "enable_recv_pipe": true, 00:18:10.188 "enable_quickack": false, 00:18:10.188 "enable_placement_id": 0, 00:18:10.188 "enable_zerocopy_send_server": true, 00:18:10.188 "enable_zerocopy_send_client": false, 00:18:10.188 "zerocopy_threshold": 0, 00:18:10.188 "tls_version": 0, 00:18:10.188 "enable_ktls": false 00:18:10.188 } 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "method": "sock_impl_set_options", 00:18:10.188 "params": { 00:18:10.188 "impl_name": "posix", 00:18:10.188 "recv_buf_size": 2097152, 00:18:10.188 "send_buf_size": 2097152, 00:18:10.188 "enable_recv_pipe": true, 00:18:10.188 "enable_quickack": false, 00:18:10.188 "enable_placement_id": 0, 00:18:10.188 "enable_zerocopy_send_server": true, 00:18:10.188 "enable_zerocopy_send_client": false, 00:18:10.188 "zerocopy_threshold": 0, 00:18:10.188 "tls_version": 0, 00:18:10.188 "enable_ktls": false 00:18:10.188 } 00:18:10.188 } 00:18:10.188 ] 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "subsystem": "vmd", 00:18:10.188 "config": [] 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "subsystem": "accel", 00:18:10.188 "config": [ 00:18:10.188 { 00:18:10.188 "method": "accel_set_options", 00:18:10.188 "params": { 00:18:10.188 "small_cache_size": 128, 00:18:10.188 "large_cache_size": 16, 00:18:10.188 "task_count": 2048, 00:18:10.188 "sequence_count": 2048, 00:18:10.188 "buf_count": 2048 00:18:10.188 } 00:18:10.188 } 00:18:10.188 ] 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "subsystem": "bdev", 00:18:10.188 "config": [ 00:18:10.188 { 00:18:10.188 "method": "bdev_set_options", 00:18:10.188 "params": { 00:18:10.188 "bdev_io_pool_size": 65535, 00:18:10.188 "bdev_io_cache_size": 256, 00:18:10.188 "bdev_auto_examine": true, 00:18:10.188 "iobuf_small_cache_size": 128, 00:18:10.188 "iobuf_large_cache_size": 16 00:18:10.188 } 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "method": "bdev_raid_set_options", 00:18:10.188 "params": { 00:18:10.188 "process_window_size_kb": 1024, 00:18:10.188 "process_max_bandwidth_mb_sec": 0 00:18:10.188 } 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "method": "bdev_iscsi_set_options", 00:18:10.188 "params": { 00:18:10.188 "timeout_sec": 30 00:18:10.188 } 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "method": "bdev_nvme_set_options", 00:18:10.188 "params": { 00:18:10.188 "action_on_timeout": "none", 00:18:10.188 "timeout_us": 0, 00:18:10.188 "timeout_admin_us": 0, 
00:18:10.188 "keep_alive_timeout_ms": 10000, 00:18:10.188 "arbitration_burst": 0, 00:18:10.188 "low_priority_weight": 0, 00:18:10.188 "medium_priority_weight": 0, 00:18:10.188 "high_priority_weight": 0, 00:18:10.188 "nvme_adminq_poll_period_us": 10000, 00:18:10.188 "nvme_ioq_poll_period_us": 0, 00:18:10.188 "io_queue_requests": 0, 00:18:10.188 "delay_cmd_submit": true, 00:18:10.188 "transport_retry_count": 4, 00:18:10.188 "bdev_retry_count": 3, 00:18:10.188 "transport_ack_timeout": 0, 00:18:10.188 "ctrlr_loss_timeout_sec": 0, 00:18:10.188 "reconnect_delay_sec": 0, 00:18:10.188 "fast_io_fail_timeout_sec": 0, 00:18:10.188 "disable_auto_failback": false, 00:18:10.188 "generate_uuids": false, 00:18:10.188 "transport_tos": 0, 00:18:10.188 "nvme_error_stat": false, 00:18:10.188 "rdma_srq_size": 0, 00:18:10.188 "io_path_stat": false, 00:18:10.188 "allow_accel_sequence": false, 00:18:10.188 "rdma_max_cq_size": 0, 00:18:10.188 "rdma_cm_event_timeout_ms": 0, 00:18:10.188 "dhchap_digests": [ 00:18:10.188 "sha256", 00:18:10.188 "sha384", 00:18:10.188 "sha512" 00:18:10.188 ], 00:18:10.188 "dhchap_dhgroups": [ 00:18:10.188 "null", 00:18:10.188 "ffdhe2048", 00:18:10.188 "ffdhe3072", 00:18:10.188 "ffdhe4096", 00:18:10.188 "ffdhe6144", 00:18:10.188 "ffdhe8192" 00:18:10.188 ] 00:18:10.188 } 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "method": "bdev_nvme_set_hotplug", 00:18:10.188 "params": { 00:18:10.188 "period_us": 100000, 00:18:10.188 "enable": false 00:18:10.188 } 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "method": "bdev_malloc_create", 00:18:10.188 "params": { 00:18:10.188 "name": "malloc0", 00:18:10.188 "num_blocks": 8192, 00:18:10.188 "block_size": 4096, 00:18:10.188 "physical_block_size": 4096, 00:18:10.188 "uuid": "9cb4c4ca-4c9d-4c4a-9de1-8136f539c560", 00:18:10.188 "optimal_io_boundary": 0, 00:18:10.188 "md_size": 0, 00:18:10.188 "dif_type": 0, 00:18:10.188 "dif_is_head_of_md": false, 00:18:10.188 "dif_pi_format": 0 00:18:10.188 } 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "method": "bdev_wait_for_examine" 00:18:10.188 } 00:18:10.188 ] 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "subsystem": "scsi", 00:18:10.188 "config": null 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "subsystem": "scheduler", 00:18:10.188 "config": [ 00:18:10.188 { 00:18:10.188 "method": "framework_set_scheduler", 00:18:10.188 "params": { 00:18:10.188 "name": "static" 00:18:10.188 } 00:18:10.188 } 00:18:10.188 ] 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "subsystem": "vhost_scsi", 00:18:10.188 "config": [] 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "subsystem": "vhost_blk", 00:18:10.188 "config": [] 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "subsystem": "ublk", 00:18:10.188 "config": [ 00:18:10.188 { 00:18:10.188 "method": "ublk_create_target", 00:18:10.188 "params": { 00:18:10.188 "cpumask": "1" 00:18:10.188 } 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "method": "ublk_start_disk", 00:18:10.188 "params": { 00:18:10.188 "bdev_name": "malloc0", 00:18:10.188 "ublk_id": 0, 00:18:10.188 "num_queues": 1, 00:18:10.188 "queue_depth": 128 00:18:10.188 } 00:18:10.188 } 00:18:10.188 ] 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "subsystem": "nbd", 00:18:10.188 "config": [] 00:18:10.188 }, 00:18:10.188 { 00:18:10.188 "subsystem": "nvmf", 00:18:10.188 "config": [ 00:18:10.188 { 00:18:10.188 "method": "nvmf_set_config", 00:18:10.188 "params": { 00:18:10.188 "discovery_filter": "match_any", 00:18:10.188 "admin_cmd_passthru": { 00:18:10.188 "identify_ctrlr": false 00:18:10.188 }, 00:18:10.188 "dhchap_digests": [ 00:18:10.188 "sha256", 
00:18:10.188 "sha384", 00:18:10.188 "sha512" 00:18:10.188 ], 00:18:10.188 "dhchap_dhgroups": [ 00:18:10.188 "null", 00:18:10.188 "ffdhe2048", 00:18:10.189 "ffdhe3072", 00:18:10.189 "ffdhe4096", 00:18:10.189 "ffdhe6144", 00:18:10.189 "ffdhe8192" 00:18:10.189 ] 00:18:10.189 } 00:18:10.189 }, 00:18:10.189 { 00:18:10.189 "method": "nvmf_set_max_subsystems", 00:18:10.189 "params": { 00:18:10.189 "max_subsystems": 1024 00:18:10.189 } 00:18:10.189 }, 00:18:10.189 { 00:18:10.189 "method": "nvmf_set_crdt", 00:18:10.189 "params": { 00:18:10.189 "crdt1": 0, 00:18:10.189 "crdt2": 0, 00:18:10.189 "crdt3": 0 00:18:10.189 } 00:18:10.189 } 00:18:10.189 ] 00:18:10.189 }, 00:18:10.189 { 00:18:10.189 "subsystem": "iscsi", 00:18:10.189 "config": [ 00:18:10.189 { 00:18:10.189 "method": "iscsi_set_options", 00:18:10.189 "params": { 00:18:10.189 "node_base": "iqn.2016-06.io.spdk", 00:18:10.189 "max_sessions": 128, 00:18:10.189 "max_connections_per_session": 2, 00:18:10.189 "max_queue_depth": 64, 00:18:10.189 "default_time2wait": 2, 00:18:10.189 "default_time2retain": 20, 00:18:10.189 "first_burst_length": 8192, 00:18:10.189 "immediate_data": true, 00:18:10.189 "allow_duplicated_isid": false, 00:18:10.189 "error_recovery_level": 0, 00:18:10.189 "nop_timeout": 60, 00:18:10.189 "nop_in_interval": 30, 00:18:10.189 "disable_chap": false, 00:18:10.189 "require_chap": false, 00:18:10.189 "mutual_chap": false, 00:18:10.189 "chap_group": 0, 00:18:10.189 "max_large_datain_per_connection": 64, 00:18:10.189 "max_r2t_per_connection": 4, 00:18:10.189 "pdu_pool_size": 36864, 00:18:10.189 "immediate_data_pool_size": 16384, 00:18:10.189 "data_out_pool_size": 2048 00:18:10.189 } 00:18:10.189 } 00:18:10.189 ] 00:18:10.189 } 00:18:10.189 ] 00:18:10.189 }' 00:18:10.189 10:54:59 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 74944 00:18:10.189 10:54:59 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 74944 ']' 00:18:10.189 10:54:59 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 74944 00:18:10.189 10:54:59 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:18:10.189 10:54:59 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.189 10:54:59 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74944 00:18:10.189 10:54:59 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.189 10:54:59 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.189 killing process with pid 74944 00:18:10.189 10:54:59 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74944' 00:18:10.189 10:54:59 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 74944 00:18:10.189 10:54:59 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 74944 00:18:11.562 [2024-11-20 10:55:00.677104] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:11.562 [2024-11-20 10:55:00.714660] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:11.562 [2024-11-20 10:55:00.714777] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:11.562 [2024-11-20 10:55:00.722634] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:11.562 [2024-11-20 10:55:00.722688] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from 
tailq 00:18:11.562 [2024-11-20 10:55:00.722704] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:11.562 [2024-11-20 10:55:00.722729] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:11.562 [2024-11-20 10:55:00.722869] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:13.522 10:55:02 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75010 00:18:13.522 10:55:02 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75010 00:18:13.522 10:55:02 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75010 ']' 00:18:13.522 10:55:02 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.522 10:55:02 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.522 10:55:02 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.522 10:55:02 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:18:13.522 10:55:02 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.522 10:55:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:13.522 10:55:02 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:18:13.522 "subsystems": [ 00:18:13.522 { 00:18:13.522 "subsystem": "fsdev", 00:18:13.522 "config": [ 00:18:13.522 { 00:18:13.522 "method": "fsdev_set_opts", 00:18:13.522 "params": { 00:18:13.522 "fsdev_io_pool_size": 65535, 00:18:13.522 "fsdev_io_cache_size": 256 00:18:13.522 } 00:18:13.522 } 00:18:13.522 ] 00:18:13.522 }, 00:18:13.522 { 00:18:13.522 "subsystem": "keyring", 00:18:13.522 "config": [] 00:18:13.522 }, 00:18:13.522 { 00:18:13.522 "subsystem": "iobuf", 00:18:13.522 "config": [ 00:18:13.522 { 00:18:13.522 "method": "iobuf_set_options", 00:18:13.522 "params": { 00:18:13.522 "small_pool_count": 8192, 00:18:13.522 "large_pool_count": 1024, 00:18:13.522 "small_bufsize": 8192, 00:18:13.522 "large_bufsize": 135168, 00:18:13.522 "enable_numa": false 00:18:13.522 } 00:18:13.522 } 00:18:13.522 ] 00:18:13.522 }, 00:18:13.522 { 00:18:13.522 "subsystem": "sock", 00:18:13.522 "config": [ 00:18:13.522 { 00:18:13.522 "method": "sock_set_default_impl", 00:18:13.522 "params": { 00:18:13.522 "impl_name": "posix" 00:18:13.522 } 00:18:13.522 }, 00:18:13.522 { 00:18:13.522 "method": "sock_impl_set_options", 00:18:13.522 "params": { 00:18:13.522 "impl_name": "ssl", 00:18:13.522 "recv_buf_size": 4096, 00:18:13.522 "send_buf_size": 4096, 00:18:13.522 "enable_recv_pipe": true, 00:18:13.522 "enable_quickack": false, 00:18:13.522 "enable_placement_id": 0, 00:18:13.522 "enable_zerocopy_send_server": true, 00:18:13.522 "enable_zerocopy_send_client": false, 00:18:13.522 "zerocopy_threshold": 0, 00:18:13.522 "tls_version": 0, 00:18:13.522 "enable_ktls": false 00:18:13.522 } 00:18:13.522 }, 00:18:13.522 { 00:18:13.522 "method": "sock_impl_set_options", 00:18:13.522 "params": { 00:18:13.523 "impl_name": "posix", 00:18:13.523 "recv_buf_size": 2097152, 00:18:13.523 "send_buf_size": 2097152, 00:18:13.523 "enable_recv_pipe": true, 00:18:13.523 "enable_quickack": false, 00:18:13.523 "enable_placement_id": 0, 00:18:13.523 "enable_zerocopy_send_server": true, 00:18:13.523 "enable_zerocopy_send_client": false, 00:18:13.523 "zerocopy_threshold": 0, 
00:18:13.523 "tls_version": 0, 00:18:13.523 "enable_ktls": false 00:18:13.523 } 00:18:13.523 } 00:18:13.523 ] 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "subsystem": "vmd", 00:18:13.523 "config": [] 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "subsystem": "accel", 00:18:13.523 "config": [ 00:18:13.523 { 00:18:13.523 "method": "accel_set_options", 00:18:13.523 "params": { 00:18:13.523 "small_cache_size": 128, 00:18:13.523 "large_cache_size": 16, 00:18:13.523 "task_count": 2048, 00:18:13.523 "sequence_count": 2048, 00:18:13.523 "buf_count": 2048 00:18:13.523 } 00:18:13.523 } 00:18:13.523 ] 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "subsystem": "bdev", 00:18:13.523 "config": [ 00:18:13.523 { 00:18:13.523 "method": "bdev_set_options", 00:18:13.523 "params": { 00:18:13.523 "bdev_io_pool_size": 65535, 00:18:13.523 "bdev_io_cache_size": 256, 00:18:13.523 "bdev_auto_examine": true, 00:18:13.523 "iobuf_small_cache_size": 128, 00:18:13.523 "iobuf_large_cache_size": 16 00:18:13.523 } 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "method": "bdev_raid_set_options", 00:18:13.523 "params": { 00:18:13.523 "process_window_size_kb": 1024, 00:18:13.523 "process_max_bandwidth_mb_sec": 0 00:18:13.523 } 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "method": "bdev_iscsi_set_options", 00:18:13.523 "params": { 00:18:13.523 "timeout_sec": 30 00:18:13.523 } 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "method": "bdev_nvme_set_options", 00:18:13.523 "params": { 00:18:13.523 "action_on_timeout": "none", 00:18:13.523 "timeout_us": 0, 00:18:13.523 "timeout_admin_us": 0, 00:18:13.523 "keep_alive_timeout_ms": 10000, 00:18:13.523 "arbitration_burst": 0, 00:18:13.523 "low_priority_weight": 0, 00:18:13.523 "medium_priority_weight": 0, 00:18:13.523 "high_priority_weight": 0, 00:18:13.523 "nvme_adminq_poll_period_us": 10000, 00:18:13.523 "nvme_ioq_poll_period_us": 0, 00:18:13.523 "io_queue_requests": 0, 00:18:13.523 "delay_cmd_submit": true, 00:18:13.523 "transport_retry_count": 4, 00:18:13.523 "bdev_retry_count": 3, 00:18:13.523 "transport_ack_timeout": 0, 00:18:13.523 "ctrlr_loss_timeout_sec": 0, 00:18:13.523 "reconnect_delay_sec": 0, 00:18:13.523 "fast_io_fail_timeout_sec": 0, 00:18:13.523 "disable_auto_failback": false, 00:18:13.523 "generate_uuids": false, 00:18:13.523 "transport_tos": 0, 00:18:13.523 "nvme_error_stat": false, 00:18:13.523 "rdma_srq_size": 0, 00:18:13.523 "io_path_stat": false, 00:18:13.523 "allow_accel_sequence": false, 00:18:13.523 "rdma_max_cq_size": 0, 00:18:13.523 "rdma_cm_event_timeout_ms": 0, 00:18:13.523 "dhchap_digests": [ 00:18:13.523 "sha256", 00:18:13.523 "sha384", 00:18:13.523 "sha512" 00:18:13.523 ], 00:18:13.523 "dhchap_dhgroups": [ 00:18:13.523 "null", 00:18:13.523 "ffdhe2048", 00:18:13.523 "ffdhe3072", 00:18:13.523 "ffdhe4096", 00:18:13.523 "ffdhe6144", 00:18:13.523 "ffdhe8192" 00:18:13.523 ] 00:18:13.523 } 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "method": "bdev_nvme_set_hotplug", 00:18:13.523 "params": { 00:18:13.523 "period_us": 100000, 00:18:13.523 "enable": false 00:18:13.523 } 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "method": "bdev_malloc_create", 00:18:13.523 "params": { 00:18:13.523 "name": "malloc0", 00:18:13.523 "num_blocks": 8192, 00:18:13.523 "block_size": 4096, 00:18:13.523 "physical_block_size": 4096, 00:18:13.523 "uuid": "9cb4c4ca-4c9d-4c4a-9de1-8136f539c560", 00:18:13.523 "optimal_io_boundary": 0, 00:18:13.523 "md_size": 0, 00:18:13.523 "dif_type": 0, 00:18:13.523 "dif_is_head_of_md": false, 00:18:13.523 "dif_pi_format": 0 00:18:13.523 } 00:18:13.523 }, 00:18:13.523 
{ 00:18:13.523 "method": "bdev_wait_for_examine" 00:18:13.523 } 00:18:13.523 ] 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "subsystem": "scsi", 00:18:13.523 "config": null 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "subsystem": "scheduler", 00:18:13.523 "config": [ 00:18:13.523 { 00:18:13.523 "method": "framework_set_scheduler", 00:18:13.523 "params": { 00:18:13.523 "name": "static" 00:18:13.523 } 00:18:13.523 } 00:18:13.523 ] 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "subsystem": "vhost_scsi", 00:18:13.523 "config": [] 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "subsystem": "vhost_blk", 00:18:13.523 "config": [] 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "subsystem": "ublk", 00:18:13.523 "config": [ 00:18:13.523 { 00:18:13.523 "method": "ublk_create_target", 00:18:13.523 "params": { 00:18:13.523 "cpumask": "1" 00:18:13.523 } 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "method": "ublk_start_disk", 00:18:13.523 "params": { 00:18:13.523 "bdev_name": "malloc0", 00:18:13.523 "ublk_id": 0, 00:18:13.523 "num_queues": 1, 00:18:13.523 "queue_depth": 128 00:18:13.523 } 00:18:13.523 } 00:18:13.523 ] 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "subsystem": "nbd", 00:18:13.523 "config": [] 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "subsystem": "nvmf", 00:18:13.523 "config": [ 00:18:13.523 { 00:18:13.523 "method": "nvmf_set_config", 00:18:13.523 "params": { 00:18:13.523 "discovery_filter": "match_any", 00:18:13.523 "admin_cmd_passthru": { 00:18:13.523 "identify_ctrlr": false 00:18:13.523 }, 00:18:13.523 "dhchap_digests": [ 00:18:13.523 "sha256", 00:18:13.523 "sha384", 00:18:13.523 "sha512" 00:18:13.523 ], 00:18:13.523 "dhchap_dhgroups": [ 00:18:13.523 "null", 00:18:13.523 "ffdhe2048", 00:18:13.523 "ffdhe3072", 00:18:13.523 "ffdhe4096", 00:18:13.523 "ffdhe6144", 00:18:13.523 "ffdhe8192" 00:18:13.523 ] 00:18:13.523 } 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "method": "nvmf_set_max_subsystems", 00:18:13.523 "params": { 00:18:13.523 "max_subsystems": 1024 00:18:13.523 } 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "method": "nvmf_set_crdt", 00:18:13.523 "params": { 00:18:13.523 "crdt1": 0, 00:18:13.523 "crdt2": 0, 00:18:13.523 "crdt3": 0 00:18:13.523 } 00:18:13.523 } 00:18:13.523 ] 00:18:13.523 }, 00:18:13.523 { 00:18:13.523 "subsystem": "iscsi", 00:18:13.523 "config": [ 00:18:13.523 { 00:18:13.523 "method": "iscsi_set_options", 00:18:13.523 "params": { 00:18:13.523 "node_base": "iqn.2016-06.io.spdk", 00:18:13.523 "max_sessions": 128, 00:18:13.523 "max_connections_per_session": 2, 00:18:13.523 "max_queue_depth": 64, 00:18:13.523 "default_time2wait": 2, 00:18:13.523 "default_time2retain": 20, 00:18:13.523 "first_burst_length": 8192, 00:18:13.523 "immediate_data": true, 00:18:13.523 "allow_duplicated_isid": false, 00:18:13.523 "error_recovery_level": 0, 00:18:13.523 "nop_timeout": 60, 00:18:13.523 "nop_in_interval": 30, 00:18:13.523 "disable_chap": false, 00:18:13.523 "require_chap": false, 00:18:13.523 "mutual_chap": false, 00:18:13.523 "chap_group": 0, 00:18:13.523 "max_large_datain_per_connection": 64, 00:18:13.523 "max_r2t_per_connection": 4, 00:18:13.523 "pdu_pool_size": 36864, 00:18:13.523 "immediate_data_pool_size": 16384, 00:18:13.523 "data_out_pool_size": 2048 00:18:13.523 } 00:18:13.523 } 00:18:13.523 ] 00:18:13.523 } 00:18:13.523 ] 00:18:13.523 }' 00:18:13.523 [2024-11-20 10:55:02.607549] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
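
Note: one detail worth noting in the restore step: the second target is launched with -c /dev/fd/63, which is bash process substitution; the test pipes the saved JSON straight from the echo above into spdk_tgt without writing a temporary file. The same effect by hand, assuming the dump was captured in $config:

# Equivalent of "spdk_tgt -L ublk -c /dev/fd/63" in the log:
./build/bin/spdk_tgt -L ublk -c <(echo "$config")
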
00:18:13.523 [2024-11-20 10:55:02.607686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75010 ] 00:18:13.781 [2024-11-20 10:55:02.787214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.781 [2024-11-20 10:55:02.898904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.714 [2024-11-20 10:55:03.893616] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:14.714 [2024-11-20 10:55:03.894758] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:14.714 [2024-11-20 10:55:03.901741] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:14.714 [2024-11-20 10:55:03.901829] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:14.714 [2024-11-20 10:55:03.901843] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:14.714 [2024-11-20 10:55:03.901851] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:14.714 [2024-11-20 10:55:03.909753] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:14.714 [2024-11-20 10:55:03.909775] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:14.714 [2024-11-20 10:55:03.917633] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:14.714 [2024-11-20 10:55:03.917735] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:14.714 [2024-11-20 10:55:03.934620] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:14.972 10:55:03 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.972 10:55:03 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:18:14.972 10:55:03 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:18:14.972 10:55:03 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:18:14.972 10:55:03 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.972 10:55:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:14.972 10:55:04 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.972 10:55:04 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:14.972 10:55:04 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:18:14.972 10:55:04 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75010 00:18:14.972 10:55:04 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75010 ']' 00:18:14.972 10:55:04 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75010 00:18:14.972 10:55:04 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:18:14.972 10:55:04 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.972 10:55:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75010 00:18:14.972 10:55:04 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.972 10:55:04 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.972 killing process with pid 75010 00:18:14.972 10:55:04 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75010' 00:18:14.972 10:55:04 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75010 00:18:14.972 10:55:04 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75010 00:18:16.347 [2024-11-20 10:55:05.558779] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:16.347 [2024-11-20 10:55:05.589691] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:16.347 [2024-11-20 10:55:05.589803] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:16.347 [2024-11-20 10:55:05.597629] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:16.347 [2024-11-20 10:55:05.597680] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:16.347 [2024-11-20 10:55:05.597689] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:16.347 [2024-11-20 10:55:05.597715] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:16.347 [2024-11-20 10:55:05.597859] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:18.882 10:55:07 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:18:18.882 00:18:18.882 real 0m10.096s 00:18:18.882 user 0m7.496s 00:18:18.882 sys 0m3.312s 00:18:18.882 10:55:07 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.882 10:55:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:18.882 ************************************ 00:18:18.882 END TEST test_save_ublk_config 00:18:18.882 ************************************ 00:18:18.882 10:55:07 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75101 00:18:18.882 10:55:07 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:18.882 10:55:07 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.882 10:55:07 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75101 00:18:18.882 10:55:07 ublk -- common/autotest_common.sh@835 -- # '[' -z 75101 ']' 00:18:18.882 10:55:07 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.882 10:55:07 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.882 10:55:07 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.882 10:55:07 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.882 10:55:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:18.882 [2024-11-20 10:55:07.770560] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
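waitforlisten, as traced above, blocks until the new target (pid 75101) is up and answering on its UNIX-domain RPC socket before any RPCs are issued. A rough standalone equivalent, assuming the default /var/tmp/spdk.sock and the same rpc.py timeout flag used later in this log:

    # poll until spdk_tgt answers on its RPC socket (rpc_get_methods is a cheap query)
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done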
00:18:18.882 [2024-11-20 10:55:07.770688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75101 ] 00:18:18.882 [2024-11-20 10:55:07.949975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:18.882 [2024-11-20 10:55:08.056894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.882 [2024-11-20 10:55:08.056929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.817 10:55:08 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.817 10:55:08 ublk -- common/autotest_common.sh@868 -- # return 0 00:18:19.817 10:55:08 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:18:19.817 10:55:08 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:19.817 10:55:08 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.817 10:55:08 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:19.817 ************************************ 00:18:19.817 START TEST test_create_ublk 00:18:19.817 ************************************ 00:18:19.817 10:55:08 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:18:19.817 10:55:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:18:19.817 10:55:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.817 10:55:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:19.817 [2024-11-20 10:55:08.921616] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:19.817 [2024-11-20 10:55:08.923961] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:19.817 10:55:08 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.817 10:55:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:18:19.817 10:55:08 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:18:19.817 10:55:08 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.817 10:55:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:20.079 10:55:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.079 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:18:20.079 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:20.079 10:55:09 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.079 10:55:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:20.079 [2024-11-20 10:55:09.213792] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:20.079 [2024-11-20 10:55:09.214249] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:20.079 [2024-11-20 10:55:09.214270] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:20.079 [2024-11-20 10:55:09.214278] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:20.079 [2024-11-20 10:55:09.221962] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:20.079 [2024-11-20 10:55:09.221983] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:20.079 
[2024-11-20 10:55:09.229625] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:20.079 [2024-11-20 10:55:09.239667] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:20.079 [2024-11-20 10:55:09.250704] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:20.079 10:55:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.079 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:18:20.079 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:18:20.079 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:18:20.079 10:55:09 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.079 10:55:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:20.079 10:55:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.079 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:18:20.079 { 00:18:20.079 "ublk_device": "/dev/ublkb0", 00:18:20.079 "id": 0, 00:18:20.079 "queue_depth": 512, 00:18:20.079 "num_queues": 4, 00:18:20.079 "bdev_name": "Malloc0" 00:18:20.079 } 00:18:20.079 ]' 00:18:20.079 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:18:20.079 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:20.349 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:18:20.349 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:18:20.349 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:18:20.349 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:18:20.349 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:18:20.349 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:18:20.349 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:18:20.349 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:20.349 10:55:09 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:18:20.349 10:55:09 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:18:20.349 10:55:09 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:18:20.349 10:55:09 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:18:20.349 10:55:09 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:18:20.349 10:55:09 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:18:20.349 10:55:09 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:18:20.349 10:55:09 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:18:20.349 10:55:09 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:18:20.349 10:55:09 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:20.349 10:55:09 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:18:20.349 10:55:09 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:18:20.608 fio: verification read phase will never start because write phase uses all of runtime 00:18:20.608 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:18:20.608 fio-3.35 00:18:20.608 Starting 1 process 00:18:30.578 00:18:30.578 fio_test: (groupid=0, jobs=1): err= 0: pid=75152: Wed Nov 20 10:55:19 2024 00:18:30.578 write: IOPS=16.8k, BW=65.4MiB/s (68.6MB/s)(654MiB/10001msec); 0 zone resets 00:18:30.578 clat (usec): min=37, max=4043, avg=58.89, stdev=96.85 00:18:30.578 lat (usec): min=37, max=4043, avg=59.34, stdev=96.86 00:18:30.578 clat percentiles (usec): 00:18:30.578 | 1.00th=[ 39], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 53], 00:18:30.578 | 30.00th=[ 53], 40.00th=[ 55], 50.00th=[ 55], 60.00th=[ 56], 00:18:30.578 | 70.00th=[ 57], 80.00th=[ 58], 90.00th=[ 60], 95.00th=[ 63], 00:18:30.578 | 99.00th=[ 74], 99.50th=[ 83], 99.90th=[ 1958], 99.95th=[ 2769], 00:18:30.578 | 99.99th=[ 3556] 00:18:30.578 bw ( KiB/s): min=65912, max=75400, per=100.00%, avg=67142.32, stdev=2031.82, samples=19 00:18:30.579 iops : min=16478, max=18850, avg=16785.58, stdev=507.96, samples=19 00:18:30.579 lat (usec) : 50=4.48%, 100=95.26%, 250=0.05%, 500=0.02%, 750=0.01% 00:18:30.579 lat (usec) : 1000=0.01% 00:18:30.579 lat (msec) : 2=0.07%, 4=0.10%, 10=0.01% 00:18:30.579 cpu : usr=3.33%, sys=9.62%, ctx=167542, majf=0, minf=794 00:18:30.579 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:30.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.579 issued rwts: total=0,167541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.579 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:30.579 00:18:30.579 Run status group 0 (all jobs): 00:18:30.579 WRITE: bw=65.4MiB/s (68.6MB/s), 65.4MiB/s-65.4MiB/s (68.6MB/s-68.6MB/s), io=654MiB (686MB), run=10001-10001msec 00:18:30.579 00:18:30.579 Disk stats (read/write): 00:18:30.579 ublkb0: ios=0/165884, merge=0/0, ticks=0/8686, in_queue=8687, util=99.11% 00:18:30.579 10:55:19 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:18:30.579 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.579 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:30.579 [2024-11-20 10:55:19.763062] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:30.579 [2024-11-20 10:55:19.813058] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:30.579 [2024-11-20 10:55:19.813951] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:30.579 [2024-11-20 10:55:19.821647] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:30.579 [2024-11-20 10:55:19.821908] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:30.579 [2024-11-20 10:55:19.821922] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.837 10:55:19 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:30.837 [2024-11-20 10:55:19.845708] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:18:30.837 request: 00:18:30.837 { 00:18:30.837 "ublk_id": 0, 00:18:30.837 "method": "ublk_stop_disk", 00:18:30.837 "req_id": 1 00:18:30.837 } 00:18:30.837 Got JSON-RPC error response 00:18:30.837 response: 00:18:30.837 { 00:18:30.837 "code": -19, 00:18:30.837 "message": "No such device" 00:18:30.837 } 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.837 10:55:19 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.837 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:30.837 [2024-11-20 10:55:19.869721] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:30.838 [2024-11-20 10:55:19.877620] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:30.838 [2024-11-20 10:55:19.877666] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:30.838 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.838 10:55:19 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:30.838 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.838 10:55:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:31.404 10:55:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.404 10:55:20 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:18:31.404 10:55:20 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:31.404 10:55:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.404 10:55:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:31.404 10:55:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.404 10:55:20 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:31.404 10:55:20 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:18:31.404 10:55:20 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:31.404 10:55:20 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:31.404 10:55:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.404 10:55:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:31.404 10:55:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.404 10:55:20 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:31.404 10:55:20 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:18:31.663 10:55:20 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:31.663 00:18:31.663 real 0m11.787s 00:18:31.663 user 0m0.721s 00:18:31.663 sys 0m1.108s 00:18:31.663 ************************************ 00:18:31.663 END TEST test_create_ublk 00:18:31.663 ************************************ 00:18:31.663 10:55:20 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.663 10:55:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:31.663 10:55:20 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:18:31.663 10:55:20 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:31.663 10:55:20 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.663 10:55:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:31.663 ************************************ 00:18:31.663 START TEST test_create_multi_ublk 00:18:31.663 ************************************ 00:18:31.663 10:55:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:18:31.663 10:55:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:18:31.663 10:55:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.663 10:55:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:31.663 [2024-11-20 10:55:20.781620] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:31.663 [2024-11-20 10:55:20.784200] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:31.663 10:55:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.663 10:55:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:18:31.663 10:55:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:18:31.663 10:55:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:31.663 10:55:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:18:31.663 10:55:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.663 10:55:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:31.922 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.922 10:55:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:18:31.922 10:55:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:31.922 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.922 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:31.922 [2024-11-20 10:55:21.060760] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:18:31.922 [2024-11-20 10:55:21.061193] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:31.922 [2024-11-20 10:55:21.061205] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:31.922 [2024-11-20 10:55:21.061218] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:31.922 [2024-11-20 10:55:21.068949] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:31.922 [2024-11-20 10:55:21.068979] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:31.922 [2024-11-20 10:55:21.076626] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:31.922 [2024-11-20 10:55:21.077186] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:31.922 [2024-11-20 10:55:21.090693] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:31.922 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.922 10:55:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:18:31.922 10:55:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:31.922 10:55:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:18:31.922 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.922 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:32.181 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.181 10:55:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:18:32.181 10:55:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:18:32.181 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.181 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:32.181 [2024-11-20 10:55:21.381751] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:18:32.181 [2024-11-20 10:55:21.382215] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:18:32.181 [2024-11-20 10:55:21.382235] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:32.181 [2024-11-20 10:55:21.382243] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:32.181 [2024-11-20 10:55:21.389657] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:32.181 [2024-11-20 10:55:21.389680] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:32.181 [2024-11-20 10:55:21.397643] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:32.181 [2024-11-20 10:55:21.398210] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:32.181 [2024-11-20 10:55:21.406650] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:32.182 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.182 10:55:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:18:32.182 10:55:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:32.182 10:55:21 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:18:32.182 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.182 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:32.441 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.441 10:55:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:18:32.441 10:55:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:18:32.441 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.441 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:32.441 [2024-11-20 10:55:21.687784] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:18:32.441 [2024-11-20 10:55:21.688238] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:18:32.441 [2024-11-20 10:55:21.688255] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:18:32.441 [2024-11-20 10:55:21.688266] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:18:32.700 [2024-11-20 10:55:21.696026] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:32.700 [2024-11-20 10:55:21.696049] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:32.700 [2024-11-20 10:55:21.703631] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:32.700 [2024-11-20 10:55:21.704209] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:18:32.700 [2024-11-20 10:55:21.710673] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:18:32.700 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.700 10:55:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:18:32.700 10:55:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:32.700 10:55:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:18:32.700 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.700 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:32.959 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.959 10:55:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:18:32.959 10:55:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:18:32.959 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.959 10:55:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:32.959 [2024-11-20 10:55:21.994768] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:18:32.959 [2024-11-20 10:55:21.995199] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:18:32.959 [2024-11-20 10:55:21.995218] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:18:32.959 [2024-11-20 10:55:21.995226] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:18:32.959 [2024-11-20 
10:55:22.006644] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:32.959 [2024-11-20 10:55:22.006670] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:32.959 [2024-11-20 10:55:22.014648] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:32.959 [2024-11-20 10:55:22.015298] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:18:32.959 [2024-11-20 10:55:22.027626] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:18:32.959 10:55:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.959 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:18:32.959 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:18:32.959 10:55:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.959 10:55:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:32.959 10:55:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.959 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:18:32.959 { 00:18:32.959 "ublk_device": "/dev/ublkb0", 00:18:32.959 "id": 0, 00:18:32.959 "queue_depth": 512, 00:18:32.959 "num_queues": 4, 00:18:32.959 "bdev_name": "Malloc0" 00:18:32.959 }, 00:18:32.959 { 00:18:32.959 "ublk_device": "/dev/ublkb1", 00:18:32.959 "id": 1, 00:18:32.959 "queue_depth": 512, 00:18:32.959 "num_queues": 4, 00:18:32.959 "bdev_name": "Malloc1" 00:18:32.959 }, 00:18:32.959 { 00:18:32.959 "ublk_device": "/dev/ublkb2", 00:18:32.959 "id": 2, 00:18:32.959 "queue_depth": 512, 00:18:32.959 "num_queues": 4, 00:18:32.959 "bdev_name": "Malloc2" 00:18:32.959 }, 00:18:32.959 { 00:18:32.959 "ublk_device": "/dev/ublkb3", 00:18:32.959 "id": 3, 00:18:32.959 "queue_depth": 512, 00:18:32.959 "num_queues": 4, 00:18:32.959 "bdev_name": "Malloc3" 00:18:32.959 } 00:18:32.959 ]' 00:18:32.959 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:18:32.959 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:32.959 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:18:32.959 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:32.959 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:18:32.959 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:18:32.959 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:18:32.959 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:32.959 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:18:33.218 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:33.218 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:18:33.218 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:33.218 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:33.218 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:18:33.218 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
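The four-entry listing above is built by repeating the same per-device sequence (the jq field checks on each entry continue below). Collapsed into the underlying rpc.py calls exactly as traced in this run, with the default RPC socket assumed:

    ./scripts/rpc.py ublk_create_target
    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096    # 128 MiB bdev, 4 KiB blocks
        ./scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512    # /dev/ublkb$i: 4 queues, depth 512
    done
    ./scripts/rpc.py ublk_get_disks                                 # yields the array shown above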
00:18:33.218 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:18:33.218 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:18:33.218 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:18:33.219 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:33.219 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:18:33.219 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:33.219 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:18:33.219 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:18:33.219 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:33.219 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:18:33.478 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:18:33.478 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:18:33.478 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:18:33.478 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:18:33.478 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:33.478 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:18:33.478 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:33.478 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:18:33.478 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:18:33.478 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:33.478 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:18:33.478 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:18:33.478 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:33.738 [2024-11-20 10:55:22.893735] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:33.738 [2024-11-20 10:55:22.926040] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:33.738 [2024-11-20 10:55:22.927180] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:33.738 [2024-11-20 10:55:22.933638] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:33.738 [2024-11-20 10:55:22.933911] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:33.738 [2024-11-20 10:55:22.933925] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.738 10:55:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:33.738 [2024-11-20 10:55:22.949688] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:33.738 [2024-11-20 10:55:22.987636] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:33.738 [2024-11-20 10:55:22.988487] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:33.997 [2024-11-20 10:55:22.996732] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:33.997 [2024-11-20 10:55:22.997019] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:33.997 [2024-11-20 10:55:22.997034] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:33.997 10:55:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.997 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:33.997 10:55:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:18:33.997 10:55:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.997 10:55:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:33.997 [2024-11-20 10:55:23.004733] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:18:33.997 [2024-11-20 10:55:23.044681] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:33.997 [2024-11-20 10:55:23.045466] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:18:33.997 [2024-11-20 10:55:23.052743] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:33.997 [2024-11-20 10:55:23.053006] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:18:33.997 [2024-11-20 10:55:23.053018] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:18:33.997 10:55:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.997 10:55:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:33.997 10:55:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:18:33.997 10:55:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.997 10:55:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
00:18:33.997 [2024-11-20 10:55:23.068704] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:18:33.997 [2024-11-20 10:55:23.105661] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:33.997 [2024-11-20 10:55:23.106370] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:18:33.997 [2024-11-20 10:55:23.112624] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:33.997 [2024-11-20 10:55:23.112910] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:18:33.997 [2024-11-20 10:55:23.112924] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:18:33.997 10:55:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.997 10:55:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:18:34.257 [2024-11-20 10:55:23.304703] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:34.257 [2024-11-20 10:55:23.312635] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:34.257 [2024-11-20 10:55:23.312671] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:34.257 10:55:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:18:34.257 10:55:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:34.257 10:55:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:34.257 10:55:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.257 10:55:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:34.826 10:55:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.826 10:55:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:34.826 10:55:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:34.826 10:55:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.826 10:55:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.444 10:55:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.444 10:55:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:35.444 10:55:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:35.444 10:55:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.444 10:55:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.702 10:55:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.702 10:55:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:35.702 10:55:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:18:35.702 10:55:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.702 10:55:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.961 10:55:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.961 10:55:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:18:35.961 10:55:25 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:18:35.961 10:55:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.961 10:55:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.961 10:55:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.961 10:55:25 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:35.961 10:55:25 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:18:35.961 10:55:25 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:35.961 10:55:25 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:35.961 10:55:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.961 10:55:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.961 10:55:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.961 10:55:25 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:35.961 10:55:25 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:18:36.220 10:55:25 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:36.220 00:18:36.220 real 0m4.458s 00:18:36.220 user 0m0.998s 00:18:36.220 sys 0m0.226s 00:18:36.220 10:55:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.220 10:55:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:36.220 ************************************ 00:18:36.220 END TEST test_create_multi_ublk 00:18:36.220 ************************************ 00:18:36.220 10:55:25 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:36.220 10:55:25 ublk -- ublk/ublk.sh@147 -- # cleanup 00:18:36.220 10:55:25 ublk -- ublk/ublk.sh@130 -- # killprocess 75101 00:18:36.220 10:55:25 ublk -- common/autotest_common.sh@954 -- # '[' -z 75101 ']' 00:18:36.220 10:55:25 ublk -- common/autotest_common.sh@958 -- # kill -0 75101 00:18:36.220 10:55:25 ublk -- common/autotest_common.sh@959 -- # uname 00:18:36.220 10:55:25 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.220 10:55:25 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75101 00:18:36.220 10:55:25 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:36.220 10:55:25 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:36.220 killing process with pid 75101 00:18:36.220 10:55:25 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75101' 00:18:36.220 10:55:25 ublk -- common/autotest_common.sh@973 -- # kill 75101 00:18:36.220 10:55:25 ublk -- common/autotest_common.sh@978 -- # wait 75101 00:18:37.184 [2024-11-20 10:55:26.425579] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:37.184 [2024-11-20 10:55:26.425645] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:38.562 00:18:38.562 real 0m30.374s 00:18:38.562 user 0m43.182s 00:18:38.562 sys 0m10.536s 00:18:38.562 10:55:27 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.562 ************************************ 00:18:38.562 END TEST ublk 00:18:38.562 ************************************ 00:18:38.562 10:55:27 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:38.562 10:55:27 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:38.562 10:55:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:18:38.562 10:55:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.562 10:55:27 -- common/autotest_common.sh@10 -- # set +x 00:18:38.562 ************************************ 00:18:38.562 START TEST ublk_recovery 00:18:38.562 ************************************ 00:18:38.562 10:55:27 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:38.562 * Looking for test storage... 00:18:38.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:38.821 10:55:27 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:38.821 10:55:27 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:18:38.821 10:55:27 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:38.821 10:55:27 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:38.821 10:55:27 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:18:38.821 10:55:27 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:38.821 10:55:27 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:38.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.821 --rc genhtml_branch_coverage=1 00:18:38.821 --rc genhtml_function_coverage=1 00:18:38.821 --rc genhtml_legend=1 00:18:38.821 --rc geninfo_all_blocks=1 00:18:38.821 --rc geninfo_unexecuted_blocks=1 00:18:38.821 00:18:38.821 ' 00:18:38.821 10:55:27 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:38.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.821 --rc genhtml_branch_coverage=1 00:18:38.821 --rc genhtml_function_coverage=1 00:18:38.821 --rc genhtml_legend=1 00:18:38.821 --rc geninfo_all_blocks=1 00:18:38.821 --rc geninfo_unexecuted_blocks=1 00:18:38.821 00:18:38.821 ' 00:18:38.821 10:55:27 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:38.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.821 --rc genhtml_branch_coverage=1 00:18:38.821 --rc genhtml_function_coverage=1 00:18:38.821 --rc genhtml_legend=1 00:18:38.821 --rc geninfo_all_blocks=1 00:18:38.821 --rc geninfo_unexecuted_blocks=1 00:18:38.821 00:18:38.821 ' 00:18:38.821 10:55:27 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:38.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.821 --rc genhtml_branch_coverage=1 00:18:38.821 --rc genhtml_function_coverage=1 00:18:38.821 --rc genhtml_legend=1 00:18:38.821 --rc geninfo_all_blocks=1 00:18:38.821 --rc geninfo_unexecuted_blocks=1 00:18:38.821 00:18:38.821 ' 00:18:38.821 10:55:27 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:38.821 10:55:27 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:38.821 10:55:27 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:38.821 10:55:27 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:38.821 10:55:27 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:38.821 10:55:27 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:38.821 10:55:27 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:38.821 10:55:27 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:38.821 10:55:27 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:18:38.821 10:55:27 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:18:38.821 10:55:27 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75528 00:18:38.822 10:55:27 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:38.822 10:55:27 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75528 00:18:38.822 10:55:27 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:38.822 10:55:27 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75528 ']' 00:18:38.822 10:55:27 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.822 10:55:27 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.822 10:55:27 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.822 10:55:27 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.822 10:55:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.822 [2024-11-20 10:55:28.016249] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:18:38.822 [2024-11-20 10:55:28.016382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75528 ] 00:18:39.081 [2024-11-20 10:55:28.196439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:39.081 [2024-11-20 10:55:28.303785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.081 [2024-11-20 10:55:28.303823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.016 10:55:29 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.016 10:55:29 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:40.016 10:55:29 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:18:40.016 10:55:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.016 10:55:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.017 [2024-11-20 10:55:29.156644] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:40.017 [2024-11-20 10:55:29.159300] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:40.017 10:55:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.017 10:55:29 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:40.017 10:55:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.017 10:55:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.275 malloc0 00:18:40.275 10:55:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.275 10:55:29 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:18:40.275 10:55:29 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.275 10:55:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.275 [2024-11-20 10:55:29.306766] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:18:40.275 [2024-11-20 10:55:29.306896] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:18:40.275 [2024-11-20 10:55:29.306911] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:40.275 [2024-11-20 10:55:29.306922] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:40.275 [2024-11-20 10:55:29.314793] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:40.275 [2024-11-20 10:55:29.314818] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:40.275 [2024-11-20 10:55:29.322623] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:40.275 [2024-11-20 10:55:29.322769] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:40.275 [2024-11-20 10:55:29.337627] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:40.275 1 00:18:40.275 10:55:29 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.275 10:55:29 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:41.211 10:55:30 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75564 00:18:41.211 10:55:30 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:41.211 10:55:30 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:41.468 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:41.468 fio-3.35 00:18:41.468 Starting 1 process 00:18:46.733 10:55:35 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75528 00:18:46.733 10:55:35 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:52.002 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75528 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:52.002 10:55:40 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75678 00:18:52.002 10:55:40 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:52.002 10:55:40 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:52.002 10:55:40 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75678 00:18:52.002 10:55:40 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75678 ']' 00:18:52.002 10:55:40 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.002 10:55:40 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.002 10:55:40 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.002 10:55:40 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.002 10:55:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.002 [2024-11-20 10:55:40.466197] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
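For orientation, the scenario traced above condenses to the sequence below, a sketch assembled from the xtrace lines of test/ublk/ublk_recovery.sh (cleanup trap and error handling omitted; waitforlisten and rpc_cmd are the helpers already visible in this log):

    modprobe ublk_drv
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk & spdk_pid=$!
    waitforlisten "$spdk_pid"
    rpc_cmd ublk_create_target
    rpc_cmd bdev_malloc_create -b malloc0 64 4096           # 64 MiB, 4 KiB blocks
    rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128           # exposes /dev/ublkb1
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 & fio_proc=$!
    sleep 5
    kill -9 "$spdk_pid"         # hard-kill the target while fio is mid-run
    sleep 5
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk & spdk_pid=$!   # second target, pid 75678 above
    waitforlisten "$spdk_pid"
    rpc_cmd ublk_create_target
    rpc_cmd bdev_malloc_create -b malloc0 64 4096
    rpc_cmd ublk_recover_disk malloc0 1   # re-attach ublk 1 via UBLK_CMD_*_USER_RECOVERY
    wait "$fio_proc"                      # fio must still complete its full 60 s run

The pass criterion is that the same fio process survives the kill/recover cycle with err=0, which the summary further down confirms.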
00:18:52.002 [2024-11-20 10:55:40.466312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75678 ] 00:18:52.002 [2024-11-20 10:55:40.649832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:52.002 [2024-11-20 10:55:40.759915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.002 [2024-11-20 10:55:40.759950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.571 10:55:41 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.571 10:55:41 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:52.571 10:55:41 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:52.571 10:55:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.571 10:55:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.571 [2024-11-20 10:55:41.614614] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:52.571 [2024-11-20 10:55:41.616918] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:52.571 10:55:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.571 10:55:41 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:52.571 10:55:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.571 10:55:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.571 malloc0 00:18:52.571 10:55:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.571 10:55:41 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:52.571 10:55:41 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.571 10:55:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:52.571 [2024-11-20 10:55:41.760765] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:52.571 [2024-11-20 10:55:41.760805] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:52.571 [2024-11-20 10:55:41.760833] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:52.571 [2024-11-20 10:55:41.768654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:52.571 [2024-11-20 10:55:41.768682] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:52.571 1 00:18:52.571 10:55:41 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.571 10:55:41 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75564 00:18:53.949 [2024-11-20 10:55:42.767092] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:53.949 [2024-11-20 10:55:42.773638] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:53.949 [2024-11-20 10:55:42.773660] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:54.887 [2024-11-20 10:55:43.772074] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:54.887 [2024-11-20 10:55:43.778610] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:54.887 [2024-11-20 10:55:43.778636] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:18:55.825 [2024-11-20 10:55:44.777041] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:55.825 [2024-11-20 10:55:44.787625] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:55.825 [2024-11-20 10:55:44.787646] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:18:55.825 [2024-11-20 10:55:44.787659] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:55.825 [2024-11-20 10:55:44.787752] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:19:17.797 [2024-11-20 10:56:05.773628] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:19:17.797 [2024-11-20 10:56:05.780240] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:19:17.797 [2024-11-20 10:56:05.787799] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:19:17.797 [2024-11-20 10:56:05.787826] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:44.341 00:19:44.341 fio_test: (groupid=0, jobs=1): err= 0: pid=75567: Wed Nov 20 10:56:30 2024 00:19:44.341 read: IOPS=12.7k, BW=49.5MiB/s (52.0MB/s)(2973MiB/60002msec) 00:19:44.341 slat (nsec): min=1954, max=323517, avg=6859.68, stdev=1996.75 00:19:44.341 clat (usec): min=1394, max=30442k, avg=4521.63, stdev=251589.86 00:19:44.341 lat (usec): min=1401, max=30442k, avg=4528.49, stdev=251589.86 00:19:44.341 clat percentiles (usec): 00:19:44.341 | 1.00th=[ 1909], 5.00th=[ 2089], 10.00th=[ 2147], 20.00th=[ 2212], 00:19:44.341 | 30.00th=[ 2245], 40.00th=[ 2245], 50.00th=[ 2278], 60.00th=[ 2311], 00:19:44.341 | 70.00th=[ 2343], 80.00th=[ 2376], 90.00th=[ 2966], 95.00th=[ 3687], 00:19:44.341 | 99.00th=[ 5276], 99.50th=[ 5800], 99.90th=[ 7373], 99.95th=[ 8291], 00:19:44.341 | 99.99th=[13435] 00:19:44.341 bw ( KiB/s): min=36296, max=106840, per=100.00%, avg=101646.47, stdev=12982.77, samples=59 00:19:44.341 iops : min= 9074, max=26710, avg=25411.59, stdev=3245.69, samples=59 00:19:44.341 write: IOPS=12.7k, BW=49.5MiB/s (51.9MB/s)(2967MiB/60002msec); 0 zone resets 00:19:44.341 slat (usec): min=2, max=271, avg= 6.85, stdev= 1.99 00:19:44.341 clat (usec): min=1351, max=30441k, avg=5565.25, stdev=304434.91 00:19:44.341 lat (usec): min=1357, max=30441k, avg=5572.10, stdev=304434.91 00:19:44.341 clat percentiles (usec): 00:19:44.341 | 1.00th=[ 1909], 5.00th=[ 2057], 10.00th=[ 2212], 00:19:44.341 | 20.00th=[ 2278], 30.00th=[ 2343], 40.00th=[ 2343], 00:19:44.341 | 50.00th=[ 2376], 60.00th=[ 2409], 70.00th=[ 2442], 00:19:44.341 | 80.00th=[ 2474], 90.00th=[ 2999], 95.00th=[ 3687], 00:19:44.341 | 99.00th=[ 5276], 99.50th=[ 5932], 99.90th=[ 7570], 00:19:44.341 | 99.95th=[ 8455], 99.99th=[17112761] 00:19:44.341 bw ( KiB/s): min=37280, max=107464, per=100.00%, avg=101449.58, stdev=12793.26, samples=59 00:19:44.341 iops : min= 9320, max=26866, avg=25362.36, stdev=3198.36, samples=59 00:19:44.341 lat (msec) : 2=2.62%, 4=93.75%, 10=3.62%, 20=0.01%, >=2000=0.01% 00:19:44.341 cpu : usr=6.45%, sys=17.49%, ctx=64799, majf=0, minf=13 00:19:44.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:44.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:44.341 issued rwts: 
total=761039,759640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:44.341 00:19:44.341 Run status group 0 (all jobs): 00:19:44.341 READ: bw=49.5MiB/s (52.0MB/s), 49.5MiB/s-49.5MiB/s (52.0MB/s-52.0MB/s), io=2973MiB (3117MB), run=60002-60002msec 00:19:44.341 WRITE: bw=49.5MiB/s (51.9MB/s), 49.5MiB/s-49.5MiB/s (51.9MB/s-51.9MB/s), io=2967MiB (3111MB), run=60002-60002msec 00:19:44.341 00:19:44.341 Disk stats (read/write): 00:19:44.341 ublkb1: ios=758076/756675, merge=0/0, ticks=3374879/4087828, in_queue=7462708, util=99.94% 00:19:44.341 10:56:30 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:19:44.341 10:56:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.341 10:56:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.341 [2024-11-20 10:56:30.631085] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:44.341 [2024-11-20 10:56:30.673667] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:44.341 [2024-11-20 10:56:30.673846] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:44.341 [2024-11-20 10:56:30.681628] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:44.341 [2024-11-20 10:56:30.681740] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:44.341 [2024-11-20 10:56:30.681750] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:44.341 10:56:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.341 10:56:30 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:44.341 10:56:30 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.341 10:56:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.341 [2024-11-20 10:56:30.697712] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:44.341 [2024-11-20 10:56:30.705620] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:44.341 [2024-11-20 10:56:30.705662] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:44.341 10:56:30 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.341 10:56:30 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:44.341 10:56:30 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:44.341 10:56:30 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75678 00:19:44.341 10:56:30 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75678 ']' 00:19:44.341 10:56:30 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75678 00:19:44.341 10:56:30 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:19:44.341 10:56:30 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.341 10:56:30 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75678 00:19:44.341 10:56:30 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:44.341 10:56:30 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:44.341 10:56:30 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75678' 00:19:44.341 killing process with pid 75678 00:19:44.341 10:56:30 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75678 00:19:44.341 10:56:30 ublk_recovery -- common/autotest_common.sh@978 -- # wait 75678 00:19:44.341 
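As a quick cross-check, the IOPS, bandwidth, and io totals in the fio summary above agree with the raw issued counts (pure shell arithmetic on values fio printed):

    # read side: 761039 IOs of 4096 B over 60002 ms
    echo $(( 761039 * 1000 / 60002 ))          # 12683        -> "12.7k" IOPS
    echo $(( 761039 * 1000 / 60002 * 4096 ))   # ~51.9e6 B/s  -> "52.0MB/s"
    echo $(( 761039 * 4096 / 1048576 ))        # 2972 MiB     -> "io=2973MiB"

The enormous clat stdev and the >=2000 ms latency bucket are consistent with I/Os parked across the kill/recover window (roughly 10:55:35 to 10:56:05 above) and completed only after recovery, while the run still finished with err=0.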
[2024-11-20 10:56:32.326547] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:44.341 [2024-11-20 10:56:32.326613] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:44.599 00:19:44.599 real 1m5.992s 00:19:44.599 user 1m51.274s 00:19:44.599 sys 0m24.419s 00:19:44.599 10:56:33 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.599 10:56:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.599 ************************************ 00:19:44.599 END TEST ublk_recovery 00:19:44.599 ************************************ 00:19:44.599 10:56:33 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:19:44.599 10:56:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:44.599 10:56:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:44.599 10:56:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:44.599 10:56:33 -- common/autotest_common.sh@10 -- # set +x 00:19:44.599 10:56:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:44.599 10:56:33 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:44.599 10:56:33 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:44.599 10:56:33 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:44.599 10:56:33 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:44.599 10:56:33 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:44.599 10:56:33 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:44.599 10:56:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:44.599 10:56:33 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:44.599 10:56:33 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:19:44.599 10:56:33 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:44.599 10:56:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:44.599 10:56:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.599 10:56:33 -- common/autotest_common.sh@10 -- # set +x 00:19:44.599 ************************************ 00:19:44.599 START TEST ftl 00:19:44.599 ************************************ 00:19:44.599 10:56:33 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:44.857 * Looking for test storage... 
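Each '[' 0 -eq 1 ']' line above is a suite being skipped: autotest.sh expands an SPDK_TEST_* flag to 0 or 1 and gates the matching run_test call, and only the ftl branch ('[' 1 -eq 1 ']') fires in this run. A minimal sketch of that pattern; the flag name is assumed from the configuration convention, and the real conditions live in spdk/autotest.sh:

    if [ "${SPDK_TEST_FTL:-0}" -eq 1 ]; then
        run_test ftl "$rootdir/test/ftl/ftl.sh"
    fi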
00:19:44.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.857 10:56:33 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:44.857 10:56:33 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:19:44.857 10:56:33 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:44.857 10:56:34 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:44.857 10:56:34 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:44.857 10:56:34 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:44.857 10:56:34 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:44.858 10:56:34 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:19:44.858 10:56:34 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:19:44.858 10:56:34 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:19:44.858 10:56:34 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:19:44.858 10:56:34 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:19:44.858 10:56:34 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:19:44.858 10:56:34 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:19:44.858 10:56:34 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:44.858 10:56:34 ftl -- scripts/common.sh@344 -- # case "$op" in 00:19:44.858 10:56:34 ftl -- scripts/common.sh@345 -- # : 1 00:19:44.858 10:56:34 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:44.858 10:56:34 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:44.858 10:56:34 ftl -- scripts/common.sh@365 -- # decimal 1 00:19:44.858 10:56:34 ftl -- scripts/common.sh@353 -- # local d=1 00:19:44.858 10:56:34 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:44.858 10:56:34 ftl -- scripts/common.sh@355 -- # echo 1 00:19:44.858 10:56:34 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:19:44.858 10:56:34 ftl -- scripts/common.sh@366 -- # decimal 2 00:19:44.858 10:56:34 ftl -- scripts/common.sh@353 -- # local d=2 00:19:44.858 10:56:34 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:44.858 10:56:34 ftl -- scripts/common.sh@355 -- # echo 2 00:19:44.858 10:56:34 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:19:44.858 10:56:34 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:44.858 10:56:34 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:44.858 10:56:34 ftl -- scripts/common.sh@368 -- # return 0 00:19:44.858 10:56:34 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:44.858 10:56:34 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:44.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.858 --rc genhtml_branch_coverage=1 00:19:44.858 --rc genhtml_function_coverage=1 00:19:44.858 --rc genhtml_legend=1 00:19:44.858 --rc geninfo_all_blocks=1 00:19:44.858 --rc geninfo_unexecuted_blocks=1 00:19:44.858 00:19:44.858 ' 00:19:44.858 10:56:34 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:44.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.858 --rc genhtml_branch_coverage=1 00:19:44.858 --rc genhtml_function_coverage=1 00:19:44.858 --rc genhtml_legend=1 00:19:44.858 --rc geninfo_all_blocks=1 00:19:44.858 --rc geninfo_unexecuted_blocks=1 00:19:44.858 00:19:44.858 ' 00:19:44.858 10:56:34 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:44.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.858 --rc genhtml_branch_coverage=1 00:19:44.858 --rc genhtml_function_coverage=1 00:19:44.858 --rc 
genhtml_legend=1 00:19:44.858 --rc geninfo_all_blocks=1 00:19:44.858 --rc geninfo_unexecuted_blocks=1 00:19:44.858 00:19:44.858 ' 00:19:44.858 10:56:34 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:44.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.858 --rc genhtml_branch_coverage=1 00:19:44.858 --rc genhtml_function_coverage=1 00:19:44.858 --rc genhtml_legend=1 00:19:44.858 --rc geninfo_all_blocks=1 00:19:44.858 --rc geninfo_unexecuted_blocks=1 00:19:44.858 00:19:44.858 ' 00:19:44.858 10:56:34 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:44.858 10:56:34 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:44.858 10:56:34 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.858 10:56:34 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:44.858 10:56:34 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:44.858 10:56:34 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:44.858 10:56:34 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.858 10:56:34 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:44.858 10:56:34 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:44.858 10:56:34 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.858 10:56:34 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.858 10:56:34 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:44.858 10:56:34 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:44.858 10:56:34 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:44.858 10:56:34 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:44.858 10:56:34 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:44.858 10:56:34 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:44.858 10:56:34 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.858 10:56:34 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:44.858 10:56:34 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:44.858 10:56:34 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:44.858 10:56:34 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:44.858 10:56:34 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:44.858 10:56:34 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:44.858 10:56:34 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:44.858 10:56:34 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:44.858 10:56:34 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:44.858 10:56:34 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:44.858 10:56:34 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:44.858 10:56:34 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:44.858 10:56:34 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:44.858 10:56:34 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:19:44.858 10:56:34 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:44.858 10:56:34 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:44.858 10:56:34 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:45.424 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:45.681 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:45.681 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:45.681 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:45.681 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:45.681 10:56:34 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:45.681 10:56:34 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76485 00:19:45.681 10:56:34 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76485 00:19:45.681 10:56:34 ftl -- common/autotest_common.sh@835 -- # '[' -z 76485 ']' 00:19:45.681 10:56:34 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.681 10:56:34 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.681 10:56:34 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.681 10:56:34 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.681 10:56:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:45.939 [2024-11-20 10:56:35.003675] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:19:45.939 [2024-11-20 10:56:35.003794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76485 ] 00:19:45.939 [2024-11-20 10:56:35.181817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.196 [2024-11-20 10:56:35.285926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.760 10:56:35 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.761 10:56:35 ftl -- common/autotest_common.sh@868 -- # return 0 00:19:46.761 10:56:35 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:46.761 10:56:35 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:48.133 10:56:36 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:48.133 10:56:36 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:48.391 10:56:37 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:48.391 10:56:37 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:48.391 10:56:37 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:48.391 10:56:37 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:48.391 10:56:37 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:48.391 10:56:37 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:48.391 10:56:37 ftl -- ftl/ftl.sh@50 -- # break 00:19:48.391 10:56:37 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:48.391 10:56:37 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:19:48.391 10:56:37 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:48.391 10:56:37 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:48.649 10:56:37 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:48.649 10:56:37 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:48.649 10:56:37 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:48.649 10:56:37 ftl -- ftl/ftl.sh@63 -- # break 00:19:48.649 10:56:37 ftl -- ftl/ftl.sh@66 -- # killprocess 76485 00:19:48.649 10:56:37 ftl -- common/autotest_common.sh@954 -- # '[' -z 76485 ']' 00:19:48.649 10:56:37 ftl -- common/autotest_common.sh@958 -- # kill -0 76485 00:19:48.649 10:56:37 ftl -- common/autotest_common.sh@959 -- # uname 00:19:48.649 10:56:37 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.649 10:56:37 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76485 00:19:48.649 10:56:37 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:48.649 killing process with pid 76485 00:19:48.649 10:56:37 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:48.649 10:56:37 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76485' 00:19:48.649 10:56:37 ftl -- common/autotest_common.sh@973 -- # kill 76485 00:19:48.649 10:56:37 ftl -- common/autotest_common.sh@978 -- # wait 76485 00:19:51.184 10:56:40 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:51.184 10:56:40 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:51.184 10:56:40 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:51.184 10:56:40 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:51.184 10:56:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:51.184 ************************************ 00:19:51.184 START TEST ftl_fio_basic 00:19:51.184 ************************************ 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:51.184 * Looking for test storage... 
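The two jq passes above perform the FTL device selection: the first picks an NVMe bdev with 64-byte per-block metadata (md_size==64, non-zoned, at least 1310720 blocks) as the NV cache, the second picks any other large-enough bdev as the base device. Reconstructed from the trace, with the filters copied verbatim:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # NV cache: needs 64 B metadata per block -> 0000:00:10.0
    cache_disks=$("$rpc_py" bdev_get_bdevs | jq -r '.[] |
        select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address')
    # base device: any other disk that is big enough -> 0000:00:11.0
    base_disks=$("$rpc_py" bdev_get_bdevs | jq -r '.[] |
        select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
               and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address')

Both loops then break on the first match, so a single cache disk and a single base disk are used even if more devices qualify.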
00:19:51.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:51.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.184 --rc genhtml_branch_coverage=1 00:19:51.184 --rc genhtml_function_coverage=1 00:19:51.184 --rc genhtml_legend=1 00:19:51.184 --rc geninfo_all_blocks=1 00:19:51.184 --rc geninfo_unexecuted_blocks=1 00:19:51.184 00:19:51.184 ' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:51.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.184 --rc 
genhtml_branch_coverage=1 00:19:51.184 --rc genhtml_function_coverage=1 00:19:51.184 --rc genhtml_legend=1 00:19:51.184 --rc geninfo_all_blocks=1 00:19:51.184 --rc geninfo_unexecuted_blocks=1 00:19:51.184 00:19:51.184 ' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:51.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.184 --rc genhtml_branch_coverage=1 00:19:51.184 --rc genhtml_function_coverage=1 00:19:51.184 --rc genhtml_legend=1 00:19:51.184 --rc geninfo_all_blocks=1 00:19:51.184 --rc geninfo_unexecuted_blocks=1 00:19:51.184 00:19:51.184 ' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:51.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.184 --rc genhtml_branch_coverage=1 00:19:51.184 --rc genhtml_function_coverage=1 00:19:51.184 --rc genhtml_legend=1 00:19:51.184 --rc geninfo_all_blocks=1 00:19:51.184 --rc geninfo_unexecuted_blocks=1 00:19:51.184 00:19:51.184 ' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:51.184 
10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76628 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76628 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76628 ']' 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.184 10:56:40 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.185 10:56:40 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
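fio.sh keys its job lists off a bash associative array indexed by the suite name passed as the third positional argument ('basic' here, per the run_test invocation above). A minimal reconstruction of the lookup; the array contents are verbatim from the trace, while the per-job loop body is an assumption since the actual execution comes later in the log:

    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

    tests=${suite[$3]}                # $3 == "basic" for this run
    [ -z "$tests" ] && exit 1         # the '[' -z 'randw-verify ...' ']' check in the trace
    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    for t in $tests; do
        echo "would run fio job: $t"  # placeholder; the real job invocation is not shown here
    done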
00:19:51.185 10:56:40 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.185 10:56:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:51.185 [2024-11-20 10:56:40.425148] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:19:51.185 [2024-11-20 10:56:40.425262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76628 ] 00:19:51.443 [2024-11-20 10:56:40.606463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:51.701 [2024-11-20 10:56:40.714061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.701 [2024-11-20 10:56:40.714243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.701 [2024-11-20 10:56:40.714428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.638 10:56:41 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.638 10:56:41 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:19:52.638 10:56:41 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:52.638 10:56:41 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:52.638 10:56:41 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:52.638 10:56:41 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:52.638 10:56:41 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:52.638 10:56:41 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:52.638 10:56:41 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:52.638 10:56:41 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:52.638 10:56:41 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:52.638 10:56:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:52.638 10:56:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:52.638 10:56:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:52.638 10:56:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:52.638 10:56:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:52.896 10:56:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:52.896 { 00:19:52.896 "name": "nvme0n1", 00:19:52.896 "aliases": [ 00:19:52.896 "4799c72a-9487-4ec7-9757-3f4d9d351ef1" 00:19:52.896 ], 00:19:52.896 "product_name": "NVMe disk", 00:19:52.896 "block_size": 4096, 00:19:52.896 "num_blocks": 1310720, 00:19:52.896 "uuid": "4799c72a-9487-4ec7-9757-3f4d9d351ef1", 00:19:52.896 "numa_id": -1, 00:19:52.896 "assigned_rate_limits": { 00:19:52.896 "rw_ios_per_sec": 0, 00:19:52.896 "rw_mbytes_per_sec": 0, 00:19:52.896 "r_mbytes_per_sec": 0, 00:19:52.896 "w_mbytes_per_sec": 0 00:19:52.896 }, 00:19:52.896 "claimed": false, 00:19:52.896 "zoned": false, 00:19:52.896 "supported_io_types": { 00:19:52.896 "read": true, 00:19:52.896 "write": true, 00:19:52.896 "unmap": true, 00:19:52.896 "flush": true, 00:19:52.896 "reset": true, 00:19:52.896 "nvme_admin": true, 00:19:52.896 "nvme_io": true, 00:19:52.896 "nvme_io_md": 
false, 00:19:52.896 "write_zeroes": true, 00:19:52.896 "zcopy": false, 00:19:52.896 "get_zone_info": false, 00:19:52.896 "zone_management": false, 00:19:52.896 "zone_append": false, 00:19:52.896 "compare": true, 00:19:52.896 "compare_and_write": false, 00:19:52.896 "abort": true, 00:19:52.896 "seek_hole": false, 00:19:52.896 "seek_data": false, 00:19:52.896 "copy": true, 00:19:52.896 "nvme_iov_md": false 00:19:52.896 }, 00:19:52.896 "driver_specific": { 00:19:52.896 "nvme": [ 00:19:52.896 { 00:19:52.896 "pci_address": "0000:00:11.0", 00:19:52.896 "trid": { 00:19:52.896 "trtype": "PCIe", 00:19:52.896 "traddr": "0000:00:11.0" 00:19:52.896 }, 00:19:52.896 "ctrlr_data": { 00:19:52.896 "cntlid": 0, 00:19:52.897 "vendor_id": "0x1b36", 00:19:52.897 "model_number": "QEMU NVMe Ctrl", 00:19:52.897 "serial_number": "12341", 00:19:52.897 "firmware_revision": "8.0.0", 00:19:52.897 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:52.897 "oacs": { 00:19:52.897 "security": 0, 00:19:52.897 "format": 1, 00:19:52.897 "firmware": 0, 00:19:52.897 "ns_manage": 1 00:19:52.897 }, 00:19:52.897 "multi_ctrlr": false, 00:19:52.897 "ana_reporting": false 00:19:52.897 }, 00:19:52.897 "vs": { 00:19:52.897 "nvme_version": "1.4" 00:19:52.897 }, 00:19:52.897 "ns_data": { 00:19:52.897 "id": 1, 00:19:52.897 "can_share": false 00:19:52.897 } 00:19:52.897 } 00:19:52.897 ], 00:19:52.897 "mp_policy": "active_passive" 00:19:52.897 } 00:19:52.897 } 00:19:52.897 ]' 00:19:52.897 10:56:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:52.897 10:56:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:52.897 10:56:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:53.155 10:56:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:53.155 10:56:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:53.155 10:56:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:19:53.155 10:56:42 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:53.155 10:56:42 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:53.155 10:56:42 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:53.155 10:56:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:53.155 10:56:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:53.155 10:56:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:53.155 10:56:42 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:53.412 10:56:42 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=ed92a18f-bd9a-45bb-b318-dec84eb16760 00:19:53.412 10:56:42 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ed92a18f-bd9a-45bb-b318-dec84eb16760 00:19:53.670 10:56:42 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8 00:19:53.670 10:56:42 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8 00:19:53.670 10:56:42 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:19:53.670 10:56:42 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:53.670 10:56:42 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8 00:19:53.670 10:56:42 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:19:53.670 10:56:42 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8 00:19:53.670 10:56:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8 00:19:53.670 10:56:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:53.670 10:56:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:53.670 10:56:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:53.670 10:56:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8 00:19:53.928 10:56:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:53.928 { 00:19:53.928 "name": "a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8", 00:19:53.928 "aliases": [ 00:19:53.928 "lvs/nvme0n1p0" 00:19:53.928 ], 00:19:53.928 "product_name": "Logical Volume", 00:19:53.928 "block_size": 4096, 00:19:53.929 "num_blocks": 26476544, 00:19:53.929 "uuid": "a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8", 00:19:53.929 "assigned_rate_limits": { 00:19:53.929 "rw_ios_per_sec": 0, 00:19:53.929 "rw_mbytes_per_sec": 0, 00:19:53.929 "r_mbytes_per_sec": 0, 00:19:53.929 "w_mbytes_per_sec": 0 00:19:53.929 }, 00:19:53.929 "claimed": false, 00:19:53.929 "zoned": false, 00:19:53.929 "supported_io_types": { 00:19:53.929 "read": true, 00:19:53.929 "write": true, 00:19:53.929 "unmap": true, 00:19:53.929 "flush": false, 00:19:53.929 "reset": true, 00:19:53.929 "nvme_admin": false, 00:19:53.929 "nvme_io": false, 00:19:53.929 "nvme_io_md": false, 00:19:53.929 "write_zeroes": true, 00:19:53.929 "zcopy": false, 00:19:53.929 "get_zone_info": false, 00:19:53.929 "zone_management": false, 00:19:53.929 "zone_append": false, 00:19:53.929 "compare": false, 00:19:53.929 "compare_and_write": false, 00:19:53.929 "abort": false, 00:19:53.929 "seek_hole": true, 00:19:53.929 "seek_data": true, 00:19:53.929 "copy": false, 00:19:53.929 "nvme_iov_md": false 00:19:53.929 }, 00:19:53.929 "driver_specific": { 00:19:53.929 "lvol": { 00:19:53.929 "lvol_store_uuid": "ed92a18f-bd9a-45bb-b318-dec84eb16760", 00:19:53.929 "base_bdev": "nvme0n1", 00:19:53.929 "thin_provision": true, 00:19:53.929 "num_allocated_clusters": 0, 00:19:53.929 "snapshot": false, 00:19:53.929 "clone": false, 00:19:53.929 "esnap_clone": false 00:19:53.929 } 00:19:53.929 } 00:19:53.929 } 00:19:53.929 ]' 00:19:53.929 10:56:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:53.929 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:53.929 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:53.929 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:53.929 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:53.929 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:53.929 10:56:43 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:19:53.929 10:56:43 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:19:53.929 10:56:43 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:54.188 10:56:43 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:54.188 10:56:43 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:19:54.188 10:56:43 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8 00:19:54.188 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8 00:19:54.188 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:54.188 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:54.188 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:54.188 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8 00:19:54.446 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:54.446 { 00:19:54.446 "name": "a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8", 00:19:54.446 "aliases": [ 00:19:54.446 "lvs/nvme0n1p0" 00:19:54.446 ], 00:19:54.446 "product_name": "Logical Volume", 00:19:54.446 "block_size": 4096, 00:19:54.446 "num_blocks": 26476544, 00:19:54.446 "uuid": "a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8", 00:19:54.446 "assigned_rate_limits": { 00:19:54.446 "rw_ios_per_sec": 0, 00:19:54.446 "rw_mbytes_per_sec": 0, 00:19:54.446 "r_mbytes_per_sec": 0, 00:19:54.446 "w_mbytes_per_sec": 0 00:19:54.446 }, 00:19:54.446 "claimed": false, 00:19:54.446 "zoned": false, 00:19:54.446 "supported_io_types": { 00:19:54.446 "read": true, 00:19:54.446 "write": true, 00:19:54.446 "unmap": true, 00:19:54.446 "flush": false, 00:19:54.446 "reset": true, 00:19:54.446 "nvme_admin": false, 00:19:54.446 "nvme_io": false, 00:19:54.446 "nvme_io_md": false, 00:19:54.446 "write_zeroes": true, 00:19:54.446 "zcopy": false, 00:19:54.446 "get_zone_info": false, 00:19:54.446 "zone_management": false, 00:19:54.446 "zone_append": false, 00:19:54.446 "compare": false, 00:19:54.446 "compare_and_write": false, 00:19:54.446 "abort": false, 00:19:54.446 "seek_hole": true, 00:19:54.446 "seek_data": true, 00:19:54.446 "copy": false, 00:19:54.446 "nvme_iov_md": false 00:19:54.446 }, 00:19:54.446 "driver_specific": { 00:19:54.446 "lvol": { 00:19:54.446 "lvol_store_uuid": "ed92a18f-bd9a-45bb-b318-dec84eb16760", 00:19:54.446 "base_bdev": "nvme0n1", 00:19:54.446 "thin_provision": true, 00:19:54.446 "num_allocated_clusters": 0, 00:19:54.446 "snapshot": false, 00:19:54.446 "clone": false, 00:19:54.446 "esnap_clone": false 00:19:54.446 } 00:19:54.446 } 00:19:54.446 } 00:19:54.446 ]' 00:19:54.446 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:54.446 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:54.446 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:54.446 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:54.446 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:54.446 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:54.447 10:56:43 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:19:54.447 10:56:43 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:54.704 10:56:43 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:19:54.704 10:56:43 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:19:54.704 10:56:43 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:19:54.704 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:19:54.704 10:56:43 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8 00:19:54.704 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8 00:19:54.704 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:54.704 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:54.704 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:54.704 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8 00:19:54.963 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:54.963 { 00:19:54.963 "name": "a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8", 00:19:54.963 "aliases": [ 00:19:54.963 "lvs/nvme0n1p0" 00:19:54.963 ], 00:19:54.963 "product_name": "Logical Volume", 00:19:54.963 "block_size": 4096, 00:19:54.963 "num_blocks": 26476544, 00:19:54.963 "uuid": "a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8", 00:19:54.963 "assigned_rate_limits": { 00:19:54.963 "rw_ios_per_sec": 0, 00:19:54.963 "rw_mbytes_per_sec": 0, 00:19:54.963 "r_mbytes_per_sec": 0, 00:19:54.963 "w_mbytes_per_sec": 0 00:19:54.963 }, 00:19:54.963 "claimed": false, 00:19:54.963 "zoned": false, 00:19:54.963 "supported_io_types": { 00:19:54.963 "read": true, 00:19:54.963 "write": true, 00:19:54.963 "unmap": true, 00:19:54.963 "flush": false, 00:19:54.963 "reset": true, 00:19:54.963 "nvme_admin": false, 00:19:54.963 "nvme_io": false, 00:19:54.963 "nvme_io_md": false, 00:19:54.963 "write_zeroes": true, 00:19:54.963 "zcopy": false, 00:19:54.963 "get_zone_info": false, 00:19:54.963 "zone_management": false, 00:19:54.963 "zone_append": false, 00:19:54.963 "compare": false, 00:19:54.963 "compare_and_write": false, 00:19:54.963 "abort": false, 00:19:54.963 "seek_hole": true, 00:19:54.963 "seek_data": true, 00:19:54.963 "copy": false, 00:19:54.963 "nvme_iov_md": false 00:19:54.963 }, 00:19:54.963 "driver_specific": { 00:19:54.963 "lvol": { 00:19:54.963 "lvol_store_uuid": "ed92a18f-bd9a-45bb-b318-dec84eb16760", 00:19:54.963 "base_bdev": "nvme0n1", 00:19:54.963 "thin_provision": true, 00:19:54.963 "num_allocated_clusters": 0, 00:19:54.963 "snapshot": false, 00:19:54.963 "clone": false, 00:19:54.963 "esnap_clone": false 00:19:54.963 } 00:19:54.963 } 00:19:54.963 } 00:19:54.963 ]' 00:19:54.963 10:56:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:54.963 10:56:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:54.963 10:56:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:54.963 10:56:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:54.963 10:56:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:54.963 10:56:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:19:54.963 10:56:44 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:19:54.963 10:56:44 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:19:54.963 10:56:44 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8 -c nvc0n1p0 --l2p_dram_limit 60 00:19:55.222 [2024-11-20 10:56:44.241178] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.222 [2024-11-20 10:56:44.241229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:55.222 [2024-11-20 10:56:44.241248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:55.222 [2024-11-20 10:56:44.241259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.222 [2024-11-20 10:56:44.241352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.222 [2024-11-20 10:56:44.241367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:55.222 [2024-11-20 10:56:44.241380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:55.222 [2024-11-20 10:56:44.241390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.222 [2024-11-20 10:56:44.241439] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:55.222 [2024-11-20 10:56:44.242446] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:55.222 [2024-11-20 10:56:44.242492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.222 [2024-11-20 10:56:44.242503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:55.222 [2024-11-20 10:56:44.242516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:19:55.222 [2024-11-20 10:56:44.242526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.222 [2024-11-20 10:56:44.242631] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 459c5d58-cb44-4c3d-8a05-dc8ddb9c1990 00:19:55.222 [2024-11-20 10:56:44.244102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.222 [2024-11-20 10:56:44.244144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:55.222 [2024-11-20 10:56:44.244156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:55.222 [2024-11-20 10:56:44.244169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.222 [2024-11-20 10:56:44.251676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.222 [2024-11-20 10:56:44.251712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:55.222 [2024-11-20 10:56:44.251740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.435 ms 00:19:55.222 [2024-11-20 10:56:44.251753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.222 [2024-11-20 10:56:44.251875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.222 [2024-11-20 10:56:44.251892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:55.222 [2024-11-20 10:56:44.251903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:55.222 [2024-11-20 10:56:44.251920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.222 [2024-11-20 10:56:44.252009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.222 [2024-11-20 10:56:44.252028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:55.222 [2024-11-20 10:56:44.252039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:55.222 [2024-11-20 10:56:44.252052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:55.222 [2024-11-20 10:56:44.252095] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:55.222 [2024-11-20 10:56:44.257115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.222 [2024-11-20 10:56:44.257148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:55.222 [2024-11-20 10:56:44.257180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.033 ms 00:19:55.223 [2024-11-20 10:56:44.257194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.223 [2024-11-20 10:56:44.257249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.223 [2024-11-20 10:56:44.257261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:55.223 [2024-11-20 10:56:44.257273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:55.223 [2024-11-20 10:56:44.257283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.223 [2024-11-20 10:56:44.257358] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:55.223 [2024-11-20 10:56:44.257503] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:55.223 [2024-11-20 10:56:44.257524] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:55.223 [2024-11-20 10:56:44.257539] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:55.223 [2024-11-20 10:56:44.257555] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:55.223 [2024-11-20 10:56:44.257567] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:55.223 [2024-11-20 10:56:44.257581] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:55.223 [2024-11-20 10:56:44.257590] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:55.223 [2024-11-20 10:56:44.257621] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:55.223 [2024-11-20 10:56:44.257630] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:55.223 [2024-11-20 10:56:44.257644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.223 [2024-11-20 10:56:44.257657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:55.223 [2024-11-20 10:56:44.257672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:19:55.223 [2024-11-20 10:56:44.257683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.223 [2024-11-20 10:56:44.257767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.223 [2024-11-20 10:56:44.257778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:55.223 [2024-11-20 10:56:44.257790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:55.223 [2024-11-20 10:56:44.257800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.223 [2024-11-20 10:56:44.257907] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:55.223 [2024-11-20 10:56:44.257926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:55.223 
[2024-11-20 10:56:44.257942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.223 [2024-11-20 10:56:44.257952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.223 [2024-11-20 10:56:44.257965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:55.223 [2024-11-20 10:56:44.257974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:55.223 [2024-11-20 10:56:44.257986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:55.223 [2024-11-20 10:56:44.257995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:55.223 [2024-11-20 10:56:44.258007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:55.223 [2024-11-20 10:56:44.258016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.223 [2024-11-20 10:56:44.258031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:55.223 [2024-11-20 10:56:44.258041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:55.223 [2024-11-20 10:56:44.258052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.223 [2024-11-20 10:56:44.258061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:55.223 [2024-11-20 10:56:44.258073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:55.223 [2024-11-20 10:56:44.258083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.223 [2024-11-20 10:56:44.258099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:55.223 [2024-11-20 10:56:44.258108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:55.223 [2024-11-20 10:56:44.258119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.223 [2024-11-20 10:56:44.258129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:55.223 [2024-11-20 10:56:44.258141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:55.223 [2024-11-20 10:56:44.258150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.223 [2024-11-20 10:56:44.258161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:55.223 [2024-11-20 10:56:44.258170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:55.223 [2024-11-20 10:56:44.258182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.223 [2024-11-20 10:56:44.258191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:55.223 [2024-11-20 10:56:44.258202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:55.223 [2024-11-20 10:56:44.258211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.223 [2024-11-20 10:56:44.258223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:55.223 [2024-11-20 10:56:44.258232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:55.223 [2024-11-20 10:56:44.258243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.223 [2024-11-20 10:56:44.258252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:55.223 [2024-11-20 10:56:44.258267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:55.223 [2024-11-20 10:56:44.258276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:19:55.223 [2024-11-20 10:56:44.258287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:55.223 [2024-11-20 10:56:44.258311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:55.223 [2024-11-20 10:56:44.258323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.223 [2024-11-20 10:56:44.258332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:55.223 [2024-11-20 10:56:44.258343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:55.223 [2024-11-20 10:56:44.258352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.223 [2024-11-20 10:56:44.258364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:55.223 [2024-11-20 10:56:44.258373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:55.223 [2024-11-20 10:56:44.258388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.223 [2024-11-20 10:56:44.258397] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:55.223 [2024-11-20 10:56:44.258409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:55.223 [2024-11-20 10:56:44.258419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.223 [2024-11-20 10:56:44.258432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.223 [2024-11-20 10:56:44.258442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:55.223 [2024-11-20 10:56:44.258465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:55.223 [2024-11-20 10:56:44.258474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:55.223 [2024-11-20 10:56:44.258486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:55.223 [2024-11-20 10:56:44.258495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:55.223 [2024-11-20 10:56:44.258507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:55.223 [2024-11-20 10:56:44.258534] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:55.223 [2024-11-20 10:56:44.258553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.223 [2024-11-20 10:56:44.258565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:55.223 [2024-11-20 10:56:44.258578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:55.223 [2024-11-20 10:56:44.258588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:55.223 [2024-11-20 10:56:44.258612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:55.223 [2024-11-20 10:56:44.258622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:55.223 [2024-11-20 10:56:44.258635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:55.223 [2024-11-20 
10:56:44.258645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:55.223 [2024-11-20 10:56:44.258658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:55.223 [2024-11-20 10:56:44.258668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:55.223 [2024-11-20 10:56:44.258684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:55.223 [2024-11-20 10:56:44.258694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:55.223 [2024-11-20 10:56:44.258707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:55.223 [2024-11-20 10:56:44.258718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:55.223 [2024-11-20 10:56:44.258730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:55.223 [2024-11-20 10:56:44.258740] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:55.223 [2024-11-20 10:56:44.258754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.223 [2024-11-20 10:56:44.258768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:55.223 [2024-11-20 10:56:44.258781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:55.223 [2024-11-20 10:56:44.258791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:55.223 [2024-11-20 10:56:44.258804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:55.223 [2024-11-20 10:56:44.258816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.223 [2024-11-20 10:56:44.258829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:55.223 [2024-11-20 10:56:44.258840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:19:55.223 [2024-11-20 10:56:44.258852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.223 [2024-11-20 10:56:44.258928] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
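The layout dump above is self-consistent once the units are lined up: the superblock section lists each region twice, once in MiB and once as 4 KiB blocks, so Region type:0x2 at blk_offs:0x20 blk_sz:0x5000 is the same 80.00 MiB L2P region reported a few lines earlier, and 80 MiB is exactly what 20971520 L2P entries at 4 bytes per address require. The bdev_ftl_create call above capped the resident portion at 60 MiB via --l2p_dram_limit, which the driver acknowledges further down as "l2p maximum resident size is: 59 (of 60) MiB". A small conversion sketch, assuming the 4096-byte metadata block size shown in the dump:

  # Assumes FTL metadata blocks are the 4096-byte blocks from the dump.
  blk=4096
  to_mib() { printf '%d.%02d MiB\n' $(( $1 * blk / 1048576 )) \
                 $(( $1 * blk % 1048576 * 100 / 1048576 )); }
  to_mib $(( 0x5000 ))                 # l2p region        -> 80.00 MiB
  to_mib $(( 0x20 ))                   # superblock        -> 0.12 MiB
  echo $(( 20971520 * 4 / 1048576 ))   # L2P entries x 4 B -> 80 (MiB)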
00:19:55.223 [2024-11-20 10:56:44.258945] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:01.786 [2024-11-20 10:56:50.252125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.252210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:01.786 [2024-11-20 10:56:50.252231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6002.922 ms 00:20:01.786 [2024-11-20 10:56:50.252244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.287756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.287813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:01.786 [2024-11-20 10:56:50.287829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.168 ms 00:20:01.786 [2024-11-20 10:56:50.287857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.287994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.288011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:01.786 [2024-11-20 10:56:50.288023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:01.786 [2024-11-20 10:56:50.288039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.348933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.348989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:01.786 [2024-11-20 10:56:50.349012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.939 ms 00:20:01.786 [2024-11-20 10:56:50.349032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.349083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.349101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:01.786 [2024-11-20 10:56:50.349115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:01.786 [2024-11-20 10:56:50.349131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.349721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.349753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:01.786 [2024-11-20 10:56:50.349768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:20:01.786 [2024-11-20 10:56:50.349789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.349945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.349966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:01.786 [2024-11-20 10:56:50.349980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:20:01.786 [2024-11-20 10:56:50.350001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.372066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.372108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:01.786 [2024-11-20 
10:56:50.372122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.063 ms 00:20:01.786 [2024-11-20 10:56:50.372135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.384806] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:01.786 [2024-11-20 10:56:50.401272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.401328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:01.786 [2024-11-20 10:56:50.401362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.065 ms 00:20:01.786 [2024-11-20 10:56:50.401376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.493771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.493825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:01.786 [2024-11-20 10:56:50.493849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.485 ms 00:20:01.786 [2024-11-20 10:56:50.493860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.494090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.494111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:01.786 [2024-11-20 10:56:50.494129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:20:01.786 [2024-11-20 10:56:50.494139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.530521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.530568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:01.786 [2024-11-20 10:56:50.530601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.369 ms 00:20:01.786 [2024-11-20 10:56:50.530676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.565920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.565961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:01.786 [2024-11-20 10:56:50.565978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.247 ms 00:20:01.786 [2024-11-20 10:56:50.565988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.566724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.566755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:01.786 [2024-11-20 10:56:50.566770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:20:01.786 [2024-11-20 10:56:50.566780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.668037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.668084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:01.786 [2024-11-20 10:56:50.668106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.352 ms 00:20:01.786 [2024-11-20 10:56:50.668136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 
10:56:50.704527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.704570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:01.786 [2024-11-20 10:56:50.704587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.343 ms 00:20:01.786 [2024-11-20 10:56:50.704621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.740359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.740399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:01.786 [2024-11-20 10:56:50.740414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.743 ms 00:20:01.786 [2024-11-20 10:56:50.740424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.776708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.776747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:01.786 [2024-11-20 10:56:50.776763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.275 ms 00:20:01.786 [2024-11-20 10:56:50.776773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.776844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.776856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:01.786 [2024-11-20 10:56:50.776874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:01.786 [2024-11-20 10:56:50.776887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.777062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.786 [2024-11-20 10:56:50.777077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:01.786 [2024-11-20 10:56:50.777092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:01.786 [2024-11-20 10:56:50.777102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.786 [2024-11-20 10:56:50.778217] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 6547.218 ms, result 0 00:20:01.786 { 00:20:01.786 "name": "ftl0", 00:20:01.786 "uuid": "459c5d58-cb44-4c3d-8a05-dc8ddb9c1990" 00:20:01.786 } 00:20:01.786 10:56:50 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:20:01.786 10:56:50 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:20:01.786 10:56:50 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:01.786 10:56:50 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:20:01.786 10:56:50 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:01.786 10:56:50 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:01.786 10:56:50 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:01.786 10:56:51 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:02.045 [ 00:20:02.045 { 00:20:02.045 "name": "ftl0", 00:20:02.045 "aliases": [ 00:20:02.045 "459c5d58-cb44-4c3d-8a05-dc8ddb9c1990" 00:20:02.045 ], 00:20:02.045 "product_name": "FTL 
disk", 00:20:02.045 "block_size": 4096, 00:20:02.045 "num_blocks": 20971520, 00:20:02.045 "uuid": "459c5d58-cb44-4c3d-8a05-dc8ddb9c1990", 00:20:02.045 "assigned_rate_limits": { 00:20:02.045 "rw_ios_per_sec": 0, 00:20:02.045 "rw_mbytes_per_sec": 0, 00:20:02.045 "r_mbytes_per_sec": 0, 00:20:02.045 "w_mbytes_per_sec": 0 00:20:02.045 }, 00:20:02.045 "claimed": false, 00:20:02.045 "zoned": false, 00:20:02.045 "supported_io_types": { 00:20:02.045 "read": true, 00:20:02.045 "write": true, 00:20:02.045 "unmap": true, 00:20:02.045 "flush": true, 00:20:02.045 "reset": false, 00:20:02.045 "nvme_admin": false, 00:20:02.045 "nvme_io": false, 00:20:02.045 "nvme_io_md": false, 00:20:02.045 "write_zeroes": true, 00:20:02.045 "zcopy": false, 00:20:02.045 "get_zone_info": false, 00:20:02.045 "zone_management": false, 00:20:02.045 "zone_append": false, 00:20:02.045 "compare": false, 00:20:02.045 "compare_and_write": false, 00:20:02.045 "abort": false, 00:20:02.045 "seek_hole": false, 00:20:02.045 "seek_data": false, 00:20:02.045 "copy": false, 00:20:02.045 "nvme_iov_md": false 00:20:02.045 }, 00:20:02.045 "driver_specific": { 00:20:02.045 "ftl": { 00:20:02.045 "base_bdev": "a4a99f81-a7b8-49e9-a8f7-ffa4d2bac3f8", 00:20:02.045 "cache": "nvc0n1p0" 00:20:02.045 } 00:20:02.045 } 00:20:02.045 } 00:20:02.045 ] 00:20:02.045 10:56:51 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:20:02.045 10:56:51 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:20:02.045 10:56:51 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:02.304 10:56:51 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:20:02.304 10:56:51 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:02.563 [2024-11-20 10:56:51.617320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.563 [2024-11-20 10:56:51.617373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:02.563 [2024-11-20 10:56:51.617389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:02.563 [2024-11-20 10:56:51.617402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.563 [2024-11-20 10:56:51.617441] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:02.563 [2024-11-20 10:56:51.621719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.563 [2024-11-20 10:56:51.621751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:02.563 [2024-11-20 10:56:51.621767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.261 ms 00:20:02.563 [2024-11-20 10:56:51.621777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.563 [2024-11-20 10:56:51.622229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.563 [2024-11-20 10:56:51.622249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:02.563 [2024-11-20 10:56:51.622263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:20:02.563 [2024-11-20 10:56:51.622273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.563 [2024-11-20 10:56:51.624792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.563 [2024-11-20 10:56:51.624822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:02.563 
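With startup complete, fio.sh waits for ftl0 via waitforbdev before saving the subsystem config, and the xtrace spells out the whole pattern: default the timeout to 2000 ms when the caller passes none, flush any pending examine callbacks, then issue a single bdev_get_bdevs with the timeout pushed down through -t. Note also that the FTL bdev advertises a different supported_io_types set than its backing lvol: flush flips to true while reset, seek_hole and seek_data flip to false. A sketch reconstructed from the trace, not copied from autotest_common.sh, with the rpc.py path shortened:

  # Reconstructed from the xtrace above; the real helper lives in
  # autotest_common.sh around lines 903-911.
  waitforbdev() {
      local bdev_name=$1
      local bdev_timeout=${2:-2000}    # ms; the default the trace applies
      rpc.py bdev_wait_for_examine
      rpc.py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null
  }
  waitforbdev ftl0                     # as invoked by fio.sh@65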
[2024-11-20 10:56:51.624837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.497 ms 00:20:02.563 [2024-11-20 10:56:51.624847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.564 [2024-11-20 10:56:51.629867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.564 [2024-11-20 10:56:51.629900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:02.564 [2024-11-20 10:56:51.629914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.998 ms 00:20:02.564 [2024-11-20 10:56:51.629924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.564 [2024-11-20 10:56:51.666542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.564 [2024-11-20 10:56:51.666604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:02.564 [2024-11-20 10:56:51.666622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.580 ms 00:20:02.564 [2024-11-20 10:56:51.666648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.564 [2024-11-20 10:56:51.688322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.564 [2024-11-20 10:56:51.688363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:02.564 [2024-11-20 10:56:51.688396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.635 ms 00:20:02.564 [2024-11-20 10:56:51.688410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.564 [2024-11-20 10:56:51.688619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.564 [2024-11-20 10:56:51.688634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:02.564 [2024-11-20 10:56:51.688648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:20:02.564 [2024-11-20 10:56:51.688659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.564 [2024-11-20 10:56:51.723993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.564 [2024-11-20 10:56:51.724033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:02.564 [2024-11-20 10:56:51.724048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.361 ms 00:20:02.564 [2024-11-20 10:56:51.724058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.564 [2024-11-20 10:56:51.759920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.564 [2024-11-20 10:56:51.759958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:02.564 [2024-11-20 10:56:51.759974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.850 ms 00:20:02.564 [2024-11-20 10:56:51.759984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.564 [2024-11-20 10:56:51.794510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.564 [2024-11-20 10:56:51.794548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:02.564 [2024-11-20 10:56:51.794563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.512 ms 00:20:02.564 [2024-11-20 10:56:51.794572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.824 [2024-11-20 10:56:51.830340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.824 [2024-11-20 10:56:51.830379] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:02.824 [2024-11-20 10:56:51.830394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.679 ms 00:20:02.824 [2024-11-20 10:56:51.830404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.824 [2024-11-20 10:56:51.830478] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:02.824 [2024-11-20 10:56:51.830494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 
[2024-11-20 10:56:51.830768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.830988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.831001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.831011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.831024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.831034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.831047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.831058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:20:02.824 [2024-11-20 10:56:51.831072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.831083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.831095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.831106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.831119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.831129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:02.824 [2024-11-20 10:56:51.831143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:02.825 [2024-11-20 10:56:51.831738] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:02.825 [2024-11-20 10:56:51.831750] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 459c5d58-cb44-4c3d-8a05-dc8ddb9c1990 00:20:02.825 [2024-11-20 10:56:51.831761] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:02.825 [2024-11-20 10:56:51.831776] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:02.825 [2024-11-20 10:56:51.831785] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:02.825 [2024-11-20 10:56:51.831800] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:02.825 [2024-11-20 10:56:51.831810] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:02.825 [2024-11-20 10:56:51.831822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:02.825 [2024-11-20 10:56:51.831833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:02.825 [2024-11-20 10:56:51.831844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:02.825 [2024-11-20 10:56:51.831853] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:02.825 [2024-11-20 10:56:51.831865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.825 [2024-11-20 10:56:51.831875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:02.825 [2024-11-20 10:56:51.831888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.392 ms 00:20:02.825 [2024-11-20 10:56:51.831898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.825 [2024-11-20 10:56:51.851867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.825 [2024-11-20 10:56:51.851906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:02.825 [2024-11-20 10:56:51.851922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.938 ms 00:20:02.825 [2024-11-20 10:56:51.851932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.825 [2024-11-20 10:56:51.852479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.825 [2024-11-20 10:56:51.852499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:02.825 [2024-11-20 10:56:51.852513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:20:02.825 [2024-11-20 10:56:51.852523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.825 [2024-11-20 10:56:51.920301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.825 [2024-11-20 10:56:51.920345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:02.825 [2024-11-20 10:56:51.920377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.825 [2024-11-20 10:56:51.920388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
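The shutdown statistics read exactly as they should for a device that saw no user I/O: all 100 bands are free with zero valid blocks, user writes is 0, and the 960 total writes are metadata traffic from startup and shutdown alone, so the write amplification factor is a divide-by-zero and gets printed as "inf". A sketch of that guard, assuming WAF here is simply total media writes over user writes:

  # Hedged sketch; field names mirror the dump above.
  total_writes=960
  user_writes=0
  if (( user_writes == 0 )); then
      echo "WAF: inf"                  # no user I/O yet, as in this run
  else
      echo "WAF: $(bc <<< "scale=2; $total_writes / $user_writes")"
  fi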
00:20:02.825 [2024-11-20 10:56:51.920452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.825 [2024-11-20 10:56:51.920462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:02.825 [2024-11-20 10:56:51.920475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.825 [2024-11-20 10:56:51.920485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.825 [2024-11-20 10:56:51.920607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.825 [2024-11-20 10:56:51.920622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:02.825 [2024-11-20 10:56:51.920639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.825 [2024-11-20 10:56:51.920649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.825 [2024-11-20 10:56:51.920682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.825 [2024-11-20 10:56:51.920693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:02.825 [2024-11-20 10:56:51.920705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.825 [2024-11-20 10:56:51.920715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.825 [2024-11-20 10:56:52.048686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.825 [2024-11-20 10:56:52.048747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:02.825 [2024-11-20 10:56:52.048781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.825 [2024-11-20 10:56:52.048792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.084 [2024-11-20 10:56:52.147092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.084 [2024-11-20 10:56:52.147146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:03.084 [2024-11-20 10:56:52.147179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.084 [2024-11-20 10:56:52.147191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.084 [2024-11-20 10:56:52.147310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.084 [2024-11-20 10:56:52.147322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:03.085 [2024-11-20 10:56:52.147335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.085 [2024-11-20 10:56:52.147349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-11-20 10:56:52.147423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.085 [2024-11-20 10:56:52.147435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:03.085 [2024-11-20 10:56:52.147448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.085 [2024-11-20 10:56:52.147458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-11-20 10:56:52.147585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.085 [2024-11-20 10:56:52.147616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:03.085 [2024-11-20 10:56:52.147629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.085 [2024-11-20 
10:56:52.147640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-11-20 10:56:52.147706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.085 [2024-11-20 10:56:52.147718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:03.085 [2024-11-20 10:56:52.147731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.085 [2024-11-20 10:56:52.147740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-11-20 10:56:52.147794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.085 [2024-11-20 10:56:52.147805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:03.085 [2024-11-20 10:56:52.147817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.085 [2024-11-20 10:56:52.147827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-11-20 10:56:52.147888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.085 [2024-11-20 10:56:52.147899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:03.085 [2024-11-20 10:56:52.147912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.085 [2024-11-20 10:56:52.147922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-11-20 10:56:52.148095] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 531.603 ms, result 0 00:20:03.085 true 00:20:03.085 10:56:52 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76628 00:20:03.085 10:56:52 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76628 ']' 00:20:03.085 10:56:52 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76628 00:20:03.085 10:56:52 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:20:03.085 10:56:52 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.085 10:56:52 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76628 00:20:03.085 10:56:52 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:03.085 killing process with pid 76628 00:20:03.085 10:56:52 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:03.085 10:56:52 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76628' 00:20:03.085 10:56:52 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76628 00:20:03.085 10:56:52 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76628 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:08.352 10:56:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:08.352 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:20:08.352 fio-3.35 00:20:08.352 Starting 1 thread 00:20:13.619 00:20:13.619 test: (groupid=0, jobs=1): err= 0: pid=76869: Wed Nov 20 10:57:02 2024 00:20:13.619 read: IOPS=921, BW=61.2MiB/s (64.2MB/s)(255MiB/4158msec) 00:20:13.619 slat (usec): min=4, max=105, avg= 5.87, stdev= 2.71 00:20:13.619 clat (usec): min=319, max=1019, avg=496.97, stdev=48.60 00:20:13.619 lat (usec): min=330, max=1025, avg=502.84, stdev=48.92 00:20:13.619 clat percentiles (usec): 00:20:13.619 | 1.00th=[ 388], 5.00th=[ 400], 10.00th=[ 449], 20.00th=[ 457], 00:20:13.619 | 30.00th=[ 465], 40.00th=[ 478], 50.00th=[ 515], 60.00th=[ 523], 00:20:13.619 | 70.00th=[ 529], 80.00th=[ 537], 90.00th=[ 545], 95.00th=[ 553], 00:20:13.619 | 99.00th=[ 619], 99.50th=[ 635], 99.90th=[ 676], 99.95th=[ 750], 00:20:13.619 | 99.99th=[ 1020] 00:20:13.619 write: IOPS=928, BW=61.6MiB/s (64.6MB/s)(256MiB/4154msec); 0 zone resets 00:20:13.619 slat (nsec): min=15612, max=66239, avg=19147.78, stdev=4053.90 00:20:13.619 clat (usec): min=394, max=1216, avg=548.62, stdev=70.93 00:20:13.619 lat (usec): min=417, max=1254, avg=567.77, stdev=71.30 00:20:13.619 clat percentiles (usec): 00:20:13.619 | 1.00th=[ 408], 5.00th=[ 469], 10.00th=[ 478], 20.00th=[ 494], 00:20:13.619 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 545], 60.00th=[ 553], 00:20:13.619 | 70.00th=[ 562], 80.00th=[ 586], 90.00th=[ 611], 95.00th=[ 627], 00:20:13.619 | 99.00th=[ 889], 99.50th=[ 922], 99.90th=[ 988], 99.95th=[ 1029], 00:20:13.619 | 99.99th=[ 1221] 00:20:13.619 bw ( KiB/s): min=59160, max=65416, per=100.00%, avg=63172.00, stdev=1844.79, samples=8 00:20:13.619 iops : min= 870, max= 962, avg=929.00, stdev=27.13, samples=8 00:20:13.619 lat (usec) : 500=35.67%, 750=63.23%, 1000=1.04% 00:20:13.619 lat 
(msec) : 2=0.05% 00:20:13.619 cpu : usr=99.30%, sys=0.10%, ctx=7, majf=0, minf=1169 00:20:13.619 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:13.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:13.619 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:13.619 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:13.619 00:20:13.619 Run status group 0 (all jobs): 00:20:13.619 READ: bw=61.2MiB/s (64.2MB/s), 61.2MiB/s-61.2MiB/s (64.2MB/s-64.2MB/s), io=255MiB (267MB), run=4158-4158msec 00:20:13.619 WRITE: bw=61.6MiB/s (64.6MB/s), 61.6MiB/s-61.6MiB/s (64.6MB/s-64.6MB/s), io=256MiB (269MB), run=4154-4154msec 00:20:15.522 ----------------------------------------------------- 00:20:15.522 Suppressions used: 00:20:15.522 count bytes template 00:20:15.522 1 5 /usr/src/fio/parse.c 00:20:15.522 1 8 libtcmalloc_minimal.so 00:20:15.522 1 904 libcrypto.so 00:20:15.522 ----------------------------------------------------- 00:20:15.522 00:20:15.522 10:57:04 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:20:15.522 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:15.522 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:15.522 10:57:04 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:15.522 10:57:04 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:20:15.522 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:15.522 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:15.523 10:57:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:15.781 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:15.781 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:15.781 fio-3.35 00:20:15.781 Starting 2 threads 00:20:42.318 00:20:42.318 first_half: (groupid=0, jobs=1): err= 0: pid=76972: Wed Nov 20 10:57:30 2024 00:20:42.318 read: IOPS=2716, BW=10.6MiB/s (11.1MB/s)(255MiB/24003msec) 00:20:42.318 slat (nsec): min=3386, max=99431, avg=5819.25, stdev=1933.20 00:20:42.318 clat (usec): min=672, max=268609, avg=34563.90, stdev=16416.91 00:20:42.318 lat (usec): min=678, max=268614, avg=34569.72, stdev=16417.05 00:20:42.318 clat percentiles (msec): 00:20:42.318 | 1.00th=[ 6], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:20:42.318 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33], 00:20:42.318 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 38], 00:20:42.318 | 99.00th=[ 133], 99.50th=[ 155], 99.90th=[ 205], 99.95th=[ 230], 00:20:42.318 | 99.99th=[ 262] 00:20:42.318 write: IOPS=4068, BW=15.9MiB/s (16.7MB/s)(256MiB/16108msec); 0 zone resets 00:20:42.318 slat (usec): min=4, max=340, avg= 7.57, stdev= 4.71 00:20:42.318 clat (usec): min=350, max=103333, avg=12476.05, stdev=22434.22 00:20:42.318 lat (usec): min=380, max=103355, avg=12483.61, stdev=22434.29 00:20:42.318 clat percentiles (usec): 00:20:42.318 | 1.00th=[ 996], 5.00th=[ 1319], 10.00th=[ 1500], 20.00th=[ 1729], 00:20:42.318 | 30.00th=[ 1942], 40.00th=[ 2311], 50.00th=[ 4228], 60.00th=[ 5800], 00:20:42.318 | 70.00th=[ 6980], 80.00th=[ 11338], 90.00th=[ 66323], 95.00th=[ 76022], 00:20:42.318 | 99.00th=[ 85459], 99.50th=[ 89654], 99.90th=[ 98042], 99.95th=[100140], 00:20:42.318 | 99.99th=[102237] 00:20:42.318 bw ( KiB/s): min= 2232, max=41744, per=88.83%, avg=23831.27, stdev=11936.73, samples=22 00:20:42.318 iops : min= 558, max=10436, avg=5957.82, stdev=2984.18, samples=22 00:20:42.318 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.48% 00:20:42.318 lat (msec) : 2=15.81%, 4=8.79%, 10=14.09%, 20=6.22%, 50=47.58% 00:20:42.318 lat (msec) : 100=6.11%, 250=0.86%, 500=0.01% 00:20:42.318 cpu : usr=99.13%, sys=0.15%, ctx=40, majf=0, minf=5595 00:20:42.318 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:42.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.318 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:42.318 issued rwts: total=65196,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.318 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:42.318 second_half: (groupid=0, jobs=1): err= 0: pid=76973: Wed Nov 20 10:57:30 2024 00:20:42.318 read: IOPS=2700, BW=10.5MiB/s (11.1MB/s)(255MiB/24150msec) 00:20:42.318 slat (nsec): min=3375, max=50580, avg=5752.91, stdev=1817.74 00:20:42.318 clat (usec): min=872, max=274031, avg=33591.10, stdev=15385.48 00:20:42.318 lat (usec): min=878, max=274039, avg=33596.85, stdev=15385.69 00:20:42.318 clat percentiles (msec): 00:20:42.319 | 1.00th=[ 6], 5.00th=[ 22], 10.00th=[ 31], 20.00th=[ 32], 00:20:42.319 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33], 00:20:42.319 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 38], 
00:20:42.319 | 99.00th=[ 123], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 176], 00:20:42.319 | 99.99th=[ 266] 00:20:42.319 write: IOPS=3353, BW=13.1MiB/s (13.7MB/s)(256MiB/19542msec); 0 zone resets 00:20:42.319 slat (usec): min=4, max=229, avg= 7.60, stdev= 3.60 00:20:42.319 clat (usec): min=416, max=104424, avg=13722.77, stdev=22998.01 00:20:42.319 lat (usec): min=428, max=104431, avg=13730.37, stdev=22998.20 00:20:42.319 clat percentiles (usec): 00:20:42.319 | 1.00th=[ 881], 5.00th=[ 1139], 10.00th=[ 1369], 20.00th=[ 1647], 00:20:42.319 | 30.00th=[ 1909], 40.00th=[ 3195], 50.00th=[ 5014], 60.00th=[ 6259], 00:20:42.319 | 70.00th=[ 9503], 80.00th=[ 12518], 90.00th=[ 66847], 95.00th=[ 77071], 00:20:42.319 | 99.00th=[ 87557], 99.50th=[ 92799], 99.90th=[100140], 99.95th=[101188], 00:20:42.319 | 99.99th=[102237] 00:20:42.319 bw ( KiB/s): min= 168, max=42416, per=67.38%, avg=18078.90, stdev=13421.88, samples=29 00:20:42.319 iops : min= 42, max=10604, avg=4519.72, stdev=3355.47, samples=29 00:20:42.319 lat (usec) : 500=0.01%, 750=0.16%, 1000=0.97% 00:20:42.319 lat (msec) : 2=15.21%, 4=6.42%, 10=15.11%, 20=7.52%, 50=47.59% 00:20:42.319 lat (msec) : 100=6.26%, 250=0.76%, 500=0.01% 00:20:42.319 cpu : usr=99.33%, sys=0.14%, ctx=165, majf=0, minf=5506 00:20:42.319 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:42.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:42.319 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:42.319 issued rwts: total=65211,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:42.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:42.319 00:20:42.319 Run status group 0 (all jobs): 00:20:42.319 READ: bw=21.1MiB/s (22.1MB/s), 10.5MiB/s-10.6MiB/s (11.1MB/s-11.1MB/s), io=509MiB (534MB), run=24003-24150msec 00:20:42.319 WRITE: bw=26.2MiB/s (27.5MB/s), 13.1MiB/s-15.9MiB/s (13.7MB/s-16.7MB/s), io=512MiB (537MB), run=16108-19542msec 00:20:43.697 ----------------------------------------------------- 00:20:43.697 Suppressions used: 00:20:43.697 count bytes template 00:20:43.697 2 10 /usr/src/fio/parse.c 00:20:43.697 2 192 /usr/src/fio/iolog.c 00:20:43.697 1 8 libtcmalloc_minimal.so 00:20:43.697 1 904 libcrypto.so 00:20:43.697 ----------------------------------------------------- 00:20:43.697 00:20:43.697 10:57:32 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:20:43.697 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.697 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:43.697 10:57:32 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:43.697 10:57:32 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:20:43.697 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.697 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:43.697 10:57:32 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:43.697 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:43.697 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:43.698 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:20:43.698 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:43.698 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:43.698 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:43.698 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:43.698 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:43.698 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:43.698 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:43.698 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:43.698 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:43.698 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:43.698 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:43.698 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:43.698 10:57:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:43.698 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:43.698 fio-3.35 00:20:43.698 Starting 1 thread 00:20:58.583 00:20:58.583 test: (groupid=0, jobs=1): err= 0: pid=77295: Wed Nov 20 10:57:46 2024 00:20:58.583 read: IOPS=8351, BW=32.6MiB/s (34.2MB/s)(255MiB/7807msec) 00:20:58.583 slat (nsec): min=3329, max=33535, avg=4801.24, stdev=1326.62 00:20:58.583 clat (usec): min=569, max=29519, avg=15318.42, stdev=897.71 00:20:58.583 lat (usec): min=573, max=29524, avg=15323.22, stdev=897.73 00:20:58.583 clat percentiles (usec): 00:20:58.583 | 1.00th=[14484], 5.00th=[14746], 10.00th=[14746], 20.00th=[14877], 00:20:58.583 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15139], 60.00th=[15270], 00:20:58.583 | 70.00th=[15401], 80.00th=[15533], 90.00th=[15926], 95.00th=[16319], 00:20:58.583 | 99.00th=[18482], 99.50th=[19006], 99.90th=[24511], 99.95th=[26608], 00:20:58.583 | 99.99th=[28967] 00:20:58.583 write: IOPS=14.2k, BW=55.5MiB/s (58.2MB/s)(256MiB/4612msec); 0 zone resets 00:20:58.583 slat (usec): min=4, max=1115, avg= 7.07, stdev= 7.30 00:20:58.583 clat (usec): min=544, max=50191, avg=8962.42, stdev=10804.13 00:20:58.583 lat (usec): min=552, max=50198, avg=8969.50, stdev=10804.16 00:20:58.583 clat percentiles (usec): 00:20:58.583 | 1.00th=[ 889], 5.00th=[ 1029], 10.00th=[ 1139], 20.00th=[ 1319], 00:20:58.583 | 30.00th=[ 1500], 40.00th=[ 1844], 50.00th=[ 6063], 60.00th=[ 6980], 00:20:58.583 | 70.00th=[ 7963], 80.00th=[ 9896], 90.00th=[32113], 95.00th=[33817], 00:20:58.583 | 99.00th=[35390], 99.50th=[36439], 99.90th=[38011], 99.95th=[40633], 00:20:58.583 | 99.99th=[46924] 00:20:58.583 bw ( KiB/s): min=10280, max=70576, per=92.24%, avg=52428.80, stdev=16647.98, samples=10 00:20:58.583 iops : min= 2570, max=17644, avg=13107.20, stdev=4162.00, samples=10 00:20:58.583 lat (usec) : 750=0.07%, 1000=1.87% 00:20:58.583 lat (msec) : 2=18.59%, 4=0.59%, 10=19.31%, 20=51.39%, 50=8.18% 00:20:58.583 lat (msec) : 100=0.01% 00:20:58.583 cpu : usr=98.93%, sys=0.31%, ctx=30, 
majf=0, minf=5565 00:20:58.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:58.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.583 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:58.583 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:58.583 00:20:58.583 Run status group 0 (all jobs): 00:20:58.583 READ: bw=32.6MiB/s (34.2MB/s), 32.6MiB/s-32.6MiB/s (34.2MB/s-34.2MB/s), io=255MiB (267MB), run=7807-7807msec 00:20:58.583 WRITE: bw=55.5MiB/s (58.2MB/s), 55.5MiB/s-55.5MiB/s (58.2MB/s-58.2MB/s), io=256MiB (268MB), run=4612-4612msec 00:20:59.519 ----------------------------------------------------- 00:20:59.519 Suppressions used: 00:20:59.519 count bytes template 00:20:59.519 1 5 /usr/src/fio/parse.c 00:20:59.519 2 192 /usr/src/fio/iolog.c 00:20:59.519 1 8 libtcmalloc_minimal.so 00:20:59.519 1 904 libcrypto.so 00:20:59.519 ----------------------------------------------------- 00:20:59.519 00:20:59.519 10:57:48 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:20:59.519 10:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.519 10:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:59.519 10:57:48 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:59.519 Remove shared memory files 00:20:59.519 10:57:48 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:20:59.519 10:57:48 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:59.519 10:57:48 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:20:59.519 10:57:48 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:20:59.519 10:57:48 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57674 /dev/shm/spdk_tgt_trace.pid75528 00:20:59.519 10:57:48 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:59.519 10:57:48 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:20:59.519 ************************************ 00:20:59.519 END TEST ftl_fio_basic 00:20:59.519 ************************************ 00:20:59.519 00:20:59.519 real 1m8.541s 00:20:59.519 user 2m29.754s 00:20:59.519 sys 0m3.841s 00:20:59.519 10:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.519 10:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:59.519 10:57:48 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:59.519 10:57:48 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:59.519 10:57:48 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:59.519 10:57:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:59.519 ************************************ 00:20:59.519 START TEST ftl_bdevperf 00:20:59.519 ************************************ 00:20:59.519 10:57:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:20:59.778 * Looking for test storage... 
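All three ftl_fio_basic runs above go through the same fio_bdev wrapper: it resolves the ASan runtime that the SPDK fio plugin links against, then preloads both before handing the job file to fio. A minimal sketch of that sequence, reconstructed from the xtrace lines in this log (the paths and the libasan result are the ones printed above, not assumptions):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  # resolve the sanitizer runtime the plugin was built against
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 in this run
  # preload sanitizer + plugin, then run the job file with the external fio
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio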
00:20:59.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:59.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.778 --rc genhtml_branch_coverage=1 00:20:59.778 --rc genhtml_function_coverage=1 00:20:59.778 --rc genhtml_legend=1 00:20:59.778 --rc geninfo_all_blocks=1 00:20:59.778 --rc geninfo_unexecuted_blocks=1 00:20:59.778 00:20:59.778 ' 00:20:59.778 10:57:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:59.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.778 --rc genhtml_branch_coverage=1 00:20:59.778 
--rc genhtml_function_coverage=1 00:20:59.778 --rc genhtml_legend=1 00:20:59.778 --rc geninfo_all_blocks=1 00:20:59.779 --rc geninfo_unexecuted_blocks=1 00:20:59.779 00:20:59.779 ' 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:59.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.779 --rc genhtml_branch_coverage=1 00:20:59.779 --rc genhtml_function_coverage=1 00:20:59.779 --rc genhtml_legend=1 00:20:59.779 --rc geninfo_all_blocks=1 00:20:59.779 --rc geninfo_unexecuted_blocks=1 00:20:59.779 00:20:59.779 ' 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:59.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:59.779 --rc genhtml_branch_coverage=1 00:20:59.779 --rc genhtml_function_coverage=1 00:20:59.779 --rc genhtml_legend=1 00:20:59.779 --rc geninfo_all_blocks=1 00:20:59.779 --rc geninfo_unexecuted_blocks=1 00:20:59.779 00:20:59.779 ' 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77522 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77522 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77522 ']' 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.779 10:57:48 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:00.039 [2024-11-20 10:57:49.074097] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
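The bdevperf binary above is started with -z and -T ftl0 and a 240 s RPC timeout; waitforlisten then blocks until the app's RPC server answers before any FTL device is constructed. A rough sketch of that handshake, assuming SPDK's default RPC socket path /var/tmp/spdk.sock and using the generic rpc_get_methods call as the liveness probe (the binary path and flags are the ones shown in the trace; the polling loop is an approximation of waitforlisten, not its literal implementation):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!
  # poll until the app's RPC server is reachable on the default socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done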
00:21:00.039 [2024-11-20 10:57:49.074216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77522 ] 00:21:00.039 [2024-11-20 10:57:49.246689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.297 [2024-11-20 10:57:49.399828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.895 10:57:49 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.895 10:57:49 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:21:00.895 10:57:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:00.895 10:57:49 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:21:00.895 10:57:49 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:00.895 10:57:49 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:21:00.895 10:57:49 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:21:00.895 10:57:49 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:01.155 10:57:50 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:01.155 10:57:50 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:21:01.156 10:57:50 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:01.156 10:57:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:01.156 10:57:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:01.156 10:57:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:01.156 10:57:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:01.156 10:57:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:01.156 10:57:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:01.156 { 00:21:01.156 "name": "nvme0n1", 00:21:01.156 "aliases": [ 00:21:01.156 "5a632b22-55d1-48cd-8079-261addb658fc" 00:21:01.156 ], 00:21:01.156 "product_name": "NVMe disk", 00:21:01.156 "block_size": 4096, 00:21:01.156 "num_blocks": 1310720, 00:21:01.156 "uuid": "5a632b22-55d1-48cd-8079-261addb658fc", 00:21:01.156 "numa_id": -1, 00:21:01.156 "assigned_rate_limits": { 00:21:01.156 "rw_ios_per_sec": 0, 00:21:01.156 "rw_mbytes_per_sec": 0, 00:21:01.156 "r_mbytes_per_sec": 0, 00:21:01.156 "w_mbytes_per_sec": 0 00:21:01.156 }, 00:21:01.156 "claimed": true, 00:21:01.156 "claim_type": "read_many_write_one", 00:21:01.156 "zoned": false, 00:21:01.156 "supported_io_types": { 00:21:01.156 "read": true, 00:21:01.156 "write": true, 00:21:01.156 "unmap": true, 00:21:01.156 "flush": true, 00:21:01.156 "reset": true, 00:21:01.156 "nvme_admin": true, 00:21:01.156 "nvme_io": true, 00:21:01.156 "nvme_io_md": false, 00:21:01.156 "write_zeroes": true, 00:21:01.156 "zcopy": false, 00:21:01.156 "get_zone_info": false, 00:21:01.156 "zone_management": false, 00:21:01.156 "zone_append": false, 00:21:01.156 "compare": true, 00:21:01.156 "compare_and_write": false, 00:21:01.156 "abort": true, 00:21:01.156 "seek_hole": false, 00:21:01.156 "seek_data": false, 00:21:01.156 "copy": true, 00:21:01.156 "nvme_iov_md": false 00:21:01.156 }, 00:21:01.156 "driver_specific": { 00:21:01.156 
"nvme": [ 00:21:01.156 { 00:21:01.156 "pci_address": "0000:00:11.0", 00:21:01.156 "trid": { 00:21:01.156 "trtype": "PCIe", 00:21:01.156 "traddr": "0000:00:11.0" 00:21:01.156 }, 00:21:01.156 "ctrlr_data": { 00:21:01.156 "cntlid": 0, 00:21:01.156 "vendor_id": "0x1b36", 00:21:01.156 "model_number": "QEMU NVMe Ctrl", 00:21:01.156 "serial_number": "12341", 00:21:01.156 "firmware_revision": "8.0.0", 00:21:01.156 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:01.156 "oacs": { 00:21:01.156 "security": 0, 00:21:01.156 "format": 1, 00:21:01.156 "firmware": 0, 00:21:01.156 "ns_manage": 1 00:21:01.156 }, 00:21:01.156 "multi_ctrlr": false, 00:21:01.156 "ana_reporting": false 00:21:01.156 }, 00:21:01.156 "vs": { 00:21:01.156 "nvme_version": "1.4" 00:21:01.156 }, 00:21:01.156 "ns_data": { 00:21:01.156 "id": 1, 00:21:01.156 "can_share": false 00:21:01.156 } 00:21:01.156 } 00:21:01.156 ], 00:21:01.156 "mp_policy": "active_passive" 00:21:01.156 } 00:21:01.156 } 00:21:01.156 ]' 00:21:01.156 10:57:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:01.414 10:57:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:01.414 10:57:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:01.414 10:57:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:01.414 10:57:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:01.415 10:57:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:21:01.415 10:57:50 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:21:01.415 10:57:50 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:01.415 10:57:50 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:21:01.415 10:57:50 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:01.415 10:57:50 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:01.673 10:57:50 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=ed92a18f-bd9a-45bb-b318-dec84eb16760 00:21:01.673 10:57:50 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:21:01.673 10:57:50 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ed92a18f-bd9a-45bb-b318-dec84eb16760 00:21:01.673 10:57:50 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:01.932 10:57:51 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=52775c6c-680d-40e9-bd82-11b3e401e392 00:21:01.932 10:57:51 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 52775c6c-680d-40e9-bd82-11b3e401e392 00:21:02.191 10:57:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=b8e27f82-e389-4658-89b8-39dc885ddbeb 00:21:02.191 10:57:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b8e27f82-e389-4658-89b8-39dc885ddbeb 00:21:02.191 10:57:51 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:21:02.191 10:57:51 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:02.191 10:57:51 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=b8e27f82-e389-4658-89b8-39dc885ddbeb 00:21:02.191 10:57:51 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:21:02.191 10:57:51 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size b8e27f82-e389-4658-89b8-39dc885ddbeb 00:21:02.191 10:57:51 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=b8e27f82-e389-4658-89b8-39dc885ddbeb 00:21:02.191 10:57:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:02.191 10:57:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:02.191 10:57:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:02.191 10:57:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b8e27f82-e389-4658-89b8-39dc885ddbeb 00:21:02.451 10:57:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:02.451 { 00:21:02.451 "name": "b8e27f82-e389-4658-89b8-39dc885ddbeb", 00:21:02.451 "aliases": [ 00:21:02.451 "lvs/nvme0n1p0" 00:21:02.451 ], 00:21:02.451 "product_name": "Logical Volume", 00:21:02.451 "block_size": 4096, 00:21:02.451 "num_blocks": 26476544, 00:21:02.451 "uuid": "b8e27f82-e389-4658-89b8-39dc885ddbeb", 00:21:02.451 "assigned_rate_limits": { 00:21:02.451 "rw_ios_per_sec": 0, 00:21:02.451 "rw_mbytes_per_sec": 0, 00:21:02.451 "r_mbytes_per_sec": 0, 00:21:02.451 "w_mbytes_per_sec": 0 00:21:02.451 }, 00:21:02.451 "claimed": false, 00:21:02.451 "zoned": false, 00:21:02.451 "supported_io_types": { 00:21:02.451 "read": true, 00:21:02.451 "write": true, 00:21:02.451 "unmap": true, 00:21:02.451 "flush": false, 00:21:02.451 "reset": true, 00:21:02.451 "nvme_admin": false, 00:21:02.451 "nvme_io": false, 00:21:02.451 "nvme_io_md": false, 00:21:02.451 "write_zeroes": true, 00:21:02.451 "zcopy": false, 00:21:02.451 "get_zone_info": false, 00:21:02.451 "zone_management": false, 00:21:02.451 "zone_append": false, 00:21:02.451 "compare": false, 00:21:02.451 "compare_and_write": false, 00:21:02.451 "abort": false, 00:21:02.451 "seek_hole": true, 00:21:02.451 "seek_data": true, 00:21:02.451 "copy": false, 00:21:02.451 "nvme_iov_md": false 00:21:02.451 }, 00:21:02.451 "driver_specific": { 00:21:02.451 "lvol": { 00:21:02.451 "lvol_store_uuid": "52775c6c-680d-40e9-bd82-11b3e401e392", 00:21:02.451 "base_bdev": "nvme0n1", 00:21:02.451 "thin_provision": true, 00:21:02.451 "num_allocated_clusters": 0, 00:21:02.451 "snapshot": false, 00:21:02.451 "clone": false, 00:21:02.451 "esnap_clone": false 00:21:02.451 } 00:21:02.451 } 00:21:02.451 } 00:21:02.451 ]' 00:21:02.451 10:57:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:02.451 10:57:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:02.451 10:57:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:02.451 10:57:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:02.451 10:57:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:02.451 10:57:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:02.451 10:57:51 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:21:02.451 10:57:51 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:21:02.451 10:57:51 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:02.710 10:57:51 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:02.710 10:57:51 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:02.710 10:57:51 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size b8e27f82-e389-4658-89b8-39dc885ddbeb 00:21:02.710 10:57:51 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=b8e27f82-e389-4658-89b8-39dc885ddbeb 00:21:02.710 10:57:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:02.710 10:57:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:02.710 10:57:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:02.710 10:57:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b8e27f82-e389-4658-89b8-39dc885ddbeb 00:21:02.969 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:02.969 { 00:21:02.969 "name": "b8e27f82-e389-4658-89b8-39dc885ddbeb", 00:21:02.969 "aliases": [ 00:21:02.969 "lvs/nvme0n1p0" 00:21:02.969 ], 00:21:02.969 "product_name": "Logical Volume", 00:21:02.969 "block_size": 4096, 00:21:02.969 "num_blocks": 26476544, 00:21:02.969 "uuid": "b8e27f82-e389-4658-89b8-39dc885ddbeb", 00:21:02.969 "assigned_rate_limits": { 00:21:02.969 "rw_ios_per_sec": 0, 00:21:02.969 "rw_mbytes_per_sec": 0, 00:21:02.969 "r_mbytes_per_sec": 0, 00:21:02.969 "w_mbytes_per_sec": 0 00:21:02.969 }, 00:21:02.969 "claimed": false, 00:21:02.969 "zoned": false, 00:21:02.969 "supported_io_types": { 00:21:02.969 "read": true, 00:21:02.969 "write": true, 00:21:02.969 "unmap": true, 00:21:02.969 "flush": false, 00:21:02.969 "reset": true, 00:21:02.969 "nvme_admin": false, 00:21:02.969 "nvme_io": false, 00:21:02.969 "nvme_io_md": false, 00:21:02.969 "write_zeroes": true, 00:21:02.969 "zcopy": false, 00:21:02.969 "get_zone_info": false, 00:21:02.969 "zone_management": false, 00:21:02.969 "zone_append": false, 00:21:02.969 "compare": false, 00:21:02.969 "compare_and_write": false, 00:21:02.969 "abort": false, 00:21:02.969 "seek_hole": true, 00:21:02.969 "seek_data": true, 00:21:02.969 "copy": false, 00:21:02.969 "nvme_iov_md": false 00:21:02.969 }, 00:21:02.969 "driver_specific": { 00:21:02.969 "lvol": { 00:21:02.969 "lvol_store_uuid": "52775c6c-680d-40e9-bd82-11b3e401e392", 00:21:02.969 "base_bdev": "nvme0n1", 00:21:02.969 "thin_provision": true, 00:21:02.969 "num_allocated_clusters": 0, 00:21:02.969 "snapshot": false, 00:21:02.969 "clone": false, 00:21:02.969 "esnap_clone": false 00:21:02.969 } 00:21:02.969 } 00:21:02.969 } 00:21:02.969 ]' 00:21:02.969 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:02.969 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:02.969 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:02.969 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:02.969 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:02.969 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:02.969 10:57:52 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:21:02.969 10:57:52 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:03.228 10:57:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:21:03.228 10:57:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size b8e27f82-e389-4658-89b8-39dc885ddbeb 00:21:03.228 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=b8e27f82-e389-4658-89b8-39dc885ddbeb 00:21:03.228 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:03.228 10:57:52 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:21:03.228 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:03.228 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b8e27f82-e389-4658-89b8-39dc885ddbeb 00:21:03.488 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:03.488 { 00:21:03.488 "name": "b8e27f82-e389-4658-89b8-39dc885ddbeb", 00:21:03.488 "aliases": [ 00:21:03.488 "lvs/nvme0n1p0" 00:21:03.488 ], 00:21:03.488 "product_name": "Logical Volume", 00:21:03.488 "block_size": 4096, 00:21:03.488 "num_blocks": 26476544, 00:21:03.488 "uuid": "b8e27f82-e389-4658-89b8-39dc885ddbeb", 00:21:03.488 "assigned_rate_limits": { 00:21:03.488 "rw_ios_per_sec": 0, 00:21:03.488 "rw_mbytes_per_sec": 0, 00:21:03.488 "r_mbytes_per_sec": 0, 00:21:03.488 "w_mbytes_per_sec": 0 00:21:03.488 }, 00:21:03.488 "claimed": false, 00:21:03.488 "zoned": false, 00:21:03.488 "supported_io_types": { 00:21:03.488 "read": true, 00:21:03.488 "write": true, 00:21:03.488 "unmap": true, 00:21:03.488 "flush": false, 00:21:03.488 "reset": true, 00:21:03.488 "nvme_admin": false, 00:21:03.488 "nvme_io": false, 00:21:03.488 "nvme_io_md": false, 00:21:03.488 "write_zeroes": true, 00:21:03.488 "zcopy": false, 00:21:03.488 "get_zone_info": false, 00:21:03.488 "zone_management": false, 00:21:03.488 "zone_append": false, 00:21:03.488 "compare": false, 00:21:03.488 "compare_and_write": false, 00:21:03.488 "abort": false, 00:21:03.488 "seek_hole": true, 00:21:03.488 "seek_data": true, 00:21:03.488 "copy": false, 00:21:03.488 "nvme_iov_md": false 00:21:03.488 }, 00:21:03.488 "driver_specific": { 00:21:03.488 "lvol": { 00:21:03.488 "lvol_store_uuid": "52775c6c-680d-40e9-bd82-11b3e401e392", 00:21:03.488 "base_bdev": "nvme0n1", 00:21:03.488 "thin_provision": true, 00:21:03.488 "num_allocated_clusters": 0, 00:21:03.488 "snapshot": false, 00:21:03.488 "clone": false, 00:21:03.488 "esnap_clone": false 00:21:03.488 } 00:21:03.488 } 00:21:03.488 } 00:21:03.488 ]' 00:21:03.488 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:03.488 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:03.488 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:03.488 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:03.488 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:03.488 10:57:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:03.488 10:57:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:21:03.488 10:57:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b8e27f82-e389-4658-89b8-39dc885ddbeb -c nvc0n1p0 --l2p_dram_limit 20 00:21:03.748 [2024-11-20 10:57:52.822790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.748 [2024-11-20 10:57:52.823015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:03.748 [2024-11-20 10:57:52.823040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:03.748 [2024-11-20 10:57:52.823053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.748 [2024-11-20 10:57:52.823128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.748 [2024-11-20 10:57:52.823145] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:03.748 [2024-11-20 10:57:52.823156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:21:03.748 [2024-11-20 10:57:52.823169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.748 [2024-11-20 10:57:52.823188] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:03.748 [2024-11-20 10:57:52.824216] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:03.748 [2024-11-20 10:57:52.824243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.748 [2024-11-20 10:57:52.824257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:03.748 [2024-11-20 10:57:52.824268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:21:03.748 [2024-11-20 10:57:52.824281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.748 [2024-11-20 10:57:52.824419] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4597650e-73b3-438d-be86-2f3c36dbe5c0 00:21:03.748 [2024-11-20 10:57:52.825823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.748 [2024-11-20 10:57:52.825846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:03.748 [2024-11-20 10:57:52.825860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:03.748 [2024-11-20 10:57:52.825872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.748 [2024-11-20 10:57:52.833352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.748 [2024-11-20 10:57:52.833482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:03.748 [2024-11-20 10:57:52.833563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.453 ms 00:21:03.748 [2024-11-20 10:57:52.833616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.749 [2024-11-20 10:57:52.833763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.749 [2024-11-20 10:57:52.833854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:03.749 [2024-11-20 10:57:52.833942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:03.749 [2024-11-20 10:57:52.833973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.749 [2024-11-20 10:57:52.834078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.749 [2024-11-20 10:57:52.834113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:03.749 [2024-11-20 10:57:52.834147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:03.749 [2024-11-20 10:57:52.834245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.749 [2024-11-20 10:57:52.834308] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:03.749 [2024-11-20 10:57:52.839435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.749 [2024-11-20 10:57:52.839578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:03.749 [2024-11-20 10:57:52.839699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.149 ms 00:21:03.749 [2024-11-20 10:57:52.839742] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.749 [2024-11-20 10:57:52.839801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.749 [2024-11-20 10:57:52.839882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:03.749 [2024-11-20 10:57:52.839917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:03.749 [2024-11-20 10:57:52.839949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.749 [2024-11-20 10:57:52.840044] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:03.749 [2024-11-20 10:57:52.840204] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:03.749 [2024-11-20 10:57:52.840335] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:03.749 [2024-11-20 10:57:52.840393] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:03.749 [2024-11-20 10:57:52.840443] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:03.749 [2024-11-20 10:57:52.840541] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:03.749 [2024-11-20 10:57:52.840603] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:03.749 [2024-11-20 10:57:52.840640] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:03.749 [2024-11-20 10:57:52.840669] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:03.749 [2024-11-20 10:57:52.840701] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:03.749 [2024-11-20 10:57:52.840785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.749 [2024-11-20 10:57:52.840827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:03.749 [2024-11-20 10:57:52.840857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:21:03.749 [2024-11-20 10:57:52.840888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.749 [2024-11-20 10:57:52.840983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.749 [2024-11-20 10:57:52.841059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:03.749 [2024-11-20 10:57:52.841088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:03.749 [2024-11-20 10:57:52.841122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.749 [2024-11-20 10:57:52.841217] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:03.749 [2024-11-20 10:57:52.841291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:03.749 [2024-11-20 10:57:52.841329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:03.749 [2024-11-20 10:57:52.841361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.749 [2024-11-20 10:57:52.841390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:03.749 [2024-11-20 10:57:52.841420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:03.749 [2024-11-20 10:57:52.841492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:03.749 
[2024-11-20 10:57:52.841533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:03.749 [2024-11-20 10:57:52.841618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:03.749 [2024-11-20 10:57:52.841675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:03.749 [2024-11-20 10:57:52.841739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:03.749 [2024-11-20 10:57:52.841775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:03.749 [2024-11-20 10:57:52.841805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:03.749 [2024-11-20 10:57:52.841848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:03.749 [2024-11-20 10:57:52.841917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:03.749 [2024-11-20 10:57:52.841956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.749 [2024-11-20 10:57:52.841986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:03.749 [2024-11-20 10:57:52.842018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:03.749 [2024-11-20 10:57:52.842081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.749 [2024-11-20 10:57:52.842121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:03.749 [2024-11-20 10:57:52.842250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:03.749 [2024-11-20 10:57:52.842288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.749 [2024-11-20 10:57:52.842318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:03.749 [2024-11-20 10:57:52.842350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:03.749 [2024-11-20 10:57:52.842379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.749 [2024-11-20 10:57:52.842410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:03.749 [2024-11-20 10:57:52.842439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:03.749 [2024-11-20 10:57:52.842571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.749 [2024-11-20 10:57:52.842583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:03.749 [2024-11-20 10:57:52.842604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:03.749 [2024-11-20 10:57:52.842614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.749 [2024-11-20 10:57:52.842629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:03.749 [2024-11-20 10:57:52.842638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:03.749 [2024-11-20 10:57:52.842650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:03.749 [2024-11-20 10:57:52.842659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:03.749 [2024-11-20 10:57:52.842671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:03.749 [2024-11-20 10:57:52.842680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:03.749 [2024-11-20 10:57:52.842692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:03.749 [2024-11-20 10:57:52.842701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:21:03.749 [2024-11-20 10:57:52.842716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.749 [2024-11-20 10:57:52.842725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:03.749 [2024-11-20 10:57:52.842737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:03.749 [2024-11-20 10:57:52.842746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.749 [2024-11-20 10:57:52.842757] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:03.749 [2024-11-20 10:57:52.842768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:03.749 [2024-11-20 10:57:52.842780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:03.749 [2024-11-20 10:57:52.842790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.749 [2024-11-20 10:57:52.842807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:03.749 [2024-11-20 10:57:52.842816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:03.749 [2024-11-20 10:57:52.842828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:03.749 [2024-11-20 10:57:52.842837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:03.749 [2024-11-20 10:57:52.842849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:03.749 [2024-11-20 10:57:52.842858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:03.749 [2024-11-20 10:57:52.842875] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:03.749 [2024-11-20 10:57:52.842889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:03.749 [2024-11-20 10:57:52.842903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:03.749 [2024-11-20 10:57:52.842914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:03.749 [2024-11-20 10:57:52.842927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:03.749 [2024-11-20 10:57:52.842937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:03.749 [2024-11-20 10:57:52.842949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:03.749 [2024-11-20 10:57:52.842960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:03.749 [2024-11-20 10:57:52.842973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:03.749 [2024-11-20 10:57:52.842984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:03.749 [2024-11-20 10:57:52.842999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:03.749 [2024-11-20 10:57:52.843009] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:03.750 [2024-11-20 10:57:52.843022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:03.750 [2024-11-20 10:57:52.843032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:03.750 [2024-11-20 10:57:52.843044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:03.750 [2024-11-20 10:57:52.843054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:03.750 [2024-11-20 10:57:52.843066] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:03.750 [2024-11-20 10:57:52.843078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:03.750 [2024-11-20 10:57:52.843094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:03.750 [2024-11-20 10:57:52.843105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:03.750 [2024-11-20 10:57:52.843118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:03.750 [2024-11-20 10:57:52.843128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:03.750 [2024-11-20 10:57:52.843142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.750 [2024-11-20 10:57:52.843156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:03.750 [2024-11-20 10:57:52.843169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.977 ms 00:21:03.750 [2024-11-20 10:57:52.843179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.750 [2024-11-20 10:57:52.843225] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:21:03.750 [2024-11-20 10:57:52.843238] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:07.940 [2024-11-20 10:57:56.599628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.940 [2024-11-20 10:57:56.599678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:07.940 [2024-11-20 10:57:56.599701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3762.498 ms 00:21:07.940 [2024-11-20 10:57:56.599712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.940 [2024-11-20 10:57:56.636841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.940 [2024-11-20 10:57:56.636883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:07.940 [2024-11-20 10:57:56.636900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.877 ms 00:21:07.940 [2024-11-20 10:57:56.636909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.940 [2024-11-20 10:57:56.637049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.940 [2024-11-20 10:57:56.637061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:07.940 [2024-11-20 10:57:56.637078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:07.940 [2024-11-20 10:57:56.637087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.940 [2024-11-20 10:57:56.696626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.940 [2024-11-20 10:57:56.696801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:07.940 [2024-11-20 10:57:56.696829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.593 ms 00:21:07.940 [2024-11-20 10:57:56.696839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.940 [2024-11-20 10:57:56.696876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.940 [2024-11-20 10:57:56.696890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:07.940 [2024-11-20 10:57:56.696902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:07.940 [2024-11-20 10:57:56.696912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.940 [2024-11-20 10:57:56.697393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.940 [2024-11-20 10:57:56.697407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:07.940 [2024-11-20 10:57:56.697420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:21:07.940 [2024-11-20 10:57:56.697430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.940 [2024-11-20 10:57:56.697532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.940 [2024-11-20 10:57:56.697544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:07.940 [2024-11-20 10:57:56.697559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:21:07.940 [2024-11-20 10:57:56.697569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.940 [2024-11-20 10:57:56.716773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.940 [2024-11-20 10:57:56.716805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:07.940 [2024-11-20 
10:57:56.716821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.216 ms 00:21:07.940 [2024-11-20 10:57:56.716831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.940 [2024-11-20 10:57:56.728471] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:21:07.940 [2024-11-20 10:57:56.734266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.940 [2024-11-20 10:57:56.734301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:07.940 [2024-11-20 10:57:56.734313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.388 ms 00:21:07.940 [2024-11-20 10:57:56.734325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.940 [2024-11-20 10:57:56.824197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.940 [2024-11-20 10:57:56.824259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:07.940 [2024-11-20 10:57:56.824275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.989 ms 00:21:07.940 [2024-11-20 10:57:56.824287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.940 [2024-11-20 10:57:56.824464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.940 [2024-11-20 10:57:56.824483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:07.940 [2024-11-20 10:57:56.824494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:21:07.940 [2024-11-20 10:57:56.824506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.940 [2024-11-20 10:57:56.859699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.940 [2024-11-20 10:57:56.859740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:07.940 [2024-11-20 10:57:56.859753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.199 ms 00:21:07.940 [2024-11-20 10:57:56.859765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.940 [2024-11-20 10:57:56.893540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.940 [2024-11-20 10:57:56.893579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:07.940 [2024-11-20 10:57:56.893608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.793 ms 00:21:07.940 [2024-11-20 10:57:56.893621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.940 [2024-11-20 10:57:56.894332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.940 [2024-11-20 10:57:56.894355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:07.940 [2024-11-20 10:57:56.894366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:21:07.940 [2024-11-20 10:57:56.894378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.940 [2024-11-20 10:57:56.990880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.941 [2024-11-20 10:57:56.990927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:07.941 [2024-11-20 10:57:56.990941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.608 ms 00:21:07.941 [2024-11-20 10:57:56.990954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.941 [2024-11-20 
10:57:57.027104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.941 [2024-11-20 10:57:57.027152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:07.941 [2024-11-20 10:57:57.027167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.135 ms 00:21:07.941 [2024-11-20 10:57:57.027183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.941 [2024-11-20 10:57:57.061058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.941 [2024-11-20 10:57:57.061097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:07.941 [2024-11-20 10:57:57.061109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.894 ms 00:21:07.941 [2024-11-20 10:57:57.061120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.941 [2024-11-20 10:57:57.095554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.941 [2024-11-20 10:57:57.095606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:07.941 [2024-11-20 10:57:57.095619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.456 ms 00:21:07.941 [2024-11-20 10:57:57.095647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.941 [2024-11-20 10:57:57.095687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.941 [2024-11-20 10:57:57.095705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:07.941 [2024-11-20 10:57:57.095716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:07.941 [2024-11-20 10:57:57.095728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.941 [2024-11-20 10:57:57.095820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:07.941 [2024-11-20 10:57:57.095835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:07.941 [2024-11-20 10:57:57.095845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:07.941 [2024-11-20 10:57:57.095857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:07.941 [2024-11-20 10:57:57.096902] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4280.615 ms, result 0 00:21:07.941 { 00:21:07.941 "name": "ftl0", 00:21:07.941 "uuid": "4597650e-73b3-438d-be86-2f3c36dbe5c0" 00:21:07.941 } 00:21:07.941 10:57:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:21:07.941 10:57:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:21:07.941 10:57:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:21:08.199 10:57:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:21:08.199 [2024-11-20 10:57:57.428883] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:08.199 I/O size of 69632 is greater than zero copy threshold (65536). 00:21:08.199 Zero copy mechanism will not be used. 00:21:08.199 Running I/O for 4 seconds... 
00:21:10.508 1458.00 IOPS, 96.82 MiB/s [2024-11-20T10:58:00.697Z] 1478.00 IOPS, 98.15 MiB/s [2024-11-20T10:58:01.632Z] 1526.67 IOPS, 101.38 MiB/s [2024-11-20T10:58:01.632Z] 1574.50 IOPS, 104.56 MiB/s
00:21:12.379 Latency(us)
00:21:12.379 [2024-11-20T10:58:01.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:12.379 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:21:12.379 ftl0 : 4.00 1574.02 104.52 0.00 0.00 670.47 203.16 1960.82
00:21:12.379 [2024-11-20T10:58:01.632Z] ===================================================================================================================
00:21:12.379 [2024-11-20T10:58:01.632Z] Total : 1574.02 104.52 0.00 0.00 670.47 203.16 1960.82
00:21:12.379 {
00:21:12.379 "results": [
00:21:12.379 {
00:21:12.379 "job": "ftl0",
00:21:12.379 "core_mask": "0x1",
00:21:12.379 "workload": "randwrite",
00:21:12.379 "status": "finished",
00:21:12.379 "queue_depth": 1,
00:21:12.379 "io_size": 69632,
00:21:12.379 "runtime": 4.001849,
00:21:12.379 "iops": 1574.0224081418364,
00:21:12.379 "mibps": 104.52492554066882,
00:21:12.379 "io_failed": 0,
00:21:12.379 "io_timeout": 0,
00:21:12.379 "avg_latency_us": 670.4707949435461,
00:21:12.379 "min_latency_us": 203.1550200803213,
00:21:12.379 "max_latency_us": 1960.816064257028
00:21:12.379 }
00:21:12.379 ],
00:21:12.379 "core_count": 1
00:21:12.379 }
00:21:12.379 [2024-11-20 10:58:01.433363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
10:58:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-11-20 10:58:01.549722] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:21:14.691 11605.00 IOPS, 45.33 MiB/s [2024-11-20T10:58:04.880Z] 11858.50 IOPS, 46.32 MiB/s [2024-11-20T10:58:05.816Z] 11934.00 IOPS, 46.62 MiB/s [2024-11-20T10:58:05.816Z] 11997.75 IOPS, 46.87 MiB/s
00:21:16.563 Latency(us)
00:21:16.563 [2024-11-20T10:58:05.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:16.563 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:21:16.563 ftl0 : 4.01 11984.58 46.81 0.00 0.00 10660.22 218.78 24635.22
00:21:16.563 [2024-11-20T10:58:05.816Z] ===================================================================================================================
00:21:16.563 [2024-11-20T10:58:05.816Z] Total : 11984.58 46.81 0.00 0.00 10660.22 0.00 24635.22
00:21:16.563 [2024-11-20 10:58:05.567472] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:21:16.563 "results": [
00:21:16.563 {
00:21:16.563 "job": "ftl0",
00:21:16.563 "core_mask": "0x1",
00:21:16.563 "workload": "randwrite",
00:21:16.563 "status": "finished",
00:21:16.563 "queue_depth": 128,
00:21:16.563 "io_size": 4096,
00:21:16.563 "runtime": 4.014658,
00:21:16.563 "iops": 11984.582497438138,
00:21:16.563 "mibps": 46.814775380617725,
00:21:16.563 "io_failed": 0,
00:21:16.563 "io_timeout": 0,
00:21:16.563 "avg_latency_us": 10660.222999559446,
00:21:16.563 "min_latency_us": 218.78232931726907,
00:21:16.563 "max_latency_us": 24635.219277108434
00:21:16.563 }
00:21:16.563 ],
00:21:16.563 "core_count": 1
00:21:16.563 }
00:21:16.563 10:58:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-11-20 10:58:05.702323] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:21:18.876 9688.00 IOPS, 37.84 MiB/s [2024-11-20T10:58:09.066Z] 9813.50 IOPS, 38.33 MiB/s [2024-11-20T10:58:10.009Z] 9732.67 IOPS, 38.02 MiB/s [2024-11-20T10:58:10.009Z] 9793.00 IOPS, 38.25 MiB/s
00:21:20.757 Latency(us)
00:21:20.757 [2024-11-20T10:58:10.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:20.757 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:20.757 Verification LBA range: start 0x0 length 0x1400000
00:21:20.757 ftl0 : 4.01 9803.15 38.29 0.00 0.00 13017.81 227.01 18002.66
00:21:20.757 [2024-11-20T10:58:10.010Z] ===================================================================================================================
00:21:20.757 [2024-11-20T10:58:10.010Z] Total : 9803.15 38.29 0.00 0.00 13017.81 0.00 18002.66
00:21:20.757 {
00:21:20.757 "results": [
00:21:20.757 {
00:21:20.757 "job": "ftl0",
00:21:20.757 "core_mask": "0x1",
00:21:20.757 "workload": "verify",
00:21:20.757 "status": "finished",
00:21:20.757 "verify_range": {
00:21:20.757 "start": 0,
00:21:20.757 "length": 20971520
00:21:20.757 },
00:21:20.757 "queue_depth": 128,
00:21:20.757 "io_size": 4096,
00:21:20.757 "runtime": 4.008813,
00:21:20.757 "iops": 9803.15120710295,
00:21:20.757 "mibps": 38.2935594027459,
00:21:20.757 "io_failed": 0,
00:21:20.757 "io_timeout": 0,
00:21:20.757 "avg_latency_us": 13017.808156210685,
00:21:20.757 "min_latency_us": 227.00722891566264,
00:21:20.757 "max_latency_us": 18002.660240963854
00:21:20.757 }
00:21:20.757 ],
00:21:20.757 "core_count": 1
00:21:20.757 }
00:21:20.757 [2024-11-20 10:58:09.723447] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
10:58:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-11-20 10:58:09.918046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.757 [2024-11-20 10:58:09.918094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:21:20.757 [2024-11-20 10:58:09.918128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:21:20.757 [2024-11-20 10:58:09.918140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.757 [2024-11-20 10:58:09.918163] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:20.757 [2024-11-20 10:58:09.922165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.757 [2024-11-20 10:58:09.922192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:21:20.757 [2024-11-20 10:58:09.922207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.989 ms
00:21:20.757 [2024-11-20 10:58:09.922216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:20.757 [2024-11-20 10:58:09.923994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:20.757 [2024-11-20 10:58:09.924032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:21:20.757 [2024-11-20 10:58:09.924048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.750 ms
00:21:20.757 [2024-11-20 10:58:09.924058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:21.014 [2024-11-20 10:58:10.123846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:21.014 [2024-11-20 10:58:10.124013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:21:21.014 [2024-11-20 10:58:10.124044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 200.082 ms 00:21:21.014 [2024-11-20 10:58:10.124055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.014 [2024-11-20 10:58:10.128978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.015 [2024-11-20 10:58:10.129008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:21.015 [2024-11-20 10:58:10.129023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.888 ms 00:21:21.015 [2024-11-20 10:58:10.129032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.015 [2024-11-20 10:58:10.163369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.015 [2024-11-20 10:58:10.163406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:21.015 [2024-11-20 10:58:10.163423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.336 ms 00:21:21.015 [2024-11-20 10:58:10.163433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.015 [2024-11-20 10:58:10.184208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.015 [2024-11-20 10:58:10.184245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:21.015 [2024-11-20 10:58:10.184264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.760 ms 00:21:21.015 [2024-11-20 10:58:10.184274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.015 [2024-11-20 10:58:10.184416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.015 [2024-11-20 10:58:10.184429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:21.015 [2024-11-20 10:58:10.184444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:21.015 [2024-11-20 10:58:10.184453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.015 [2024-11-20 10:58:10.219005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.015 [2024-11-20 10:58:10.219038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:21.015 [2024-11-20 10:58:10.219053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.588 ms 00:21:21.015 [2024-11-20 10:58:10.219062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.015 [2024-11-20 10:58:10.253378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.015 [2024-11-20 10:58:10.253411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:21.015 [2024-11-20 10:58:10.253425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.332 ms 00:21:21.015 [2024-11-20 10:58:10.253450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.274 [2024-11-20 10:58:10.287844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.274 [2024-11-20 10:58:10.287878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:21.274 [2024-11-20 10:58:10.287893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.409 ms 00:21:21.274 [2024-11-20 10:58:10.287901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.274 [2024-11-20 10:58:10.321873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.274 [2024-11-20 10:58:10.322003] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:21.274 [2024-11-20 10:58:10.322047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.907 ms 00:21:21.274 [2024-11-20 10:58:10.322057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.274 [2024-11-20 10:58:10.322112] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:21.274 [2024-11-20 10:58:10.322127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:21:21.274 [2024-11-20 10:58:10.322381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:21.274 [2024-11-20 10:58:10.322949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.322962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.322972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.322985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.322996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323329] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:21.275 [2024-11-20 10:58:10.323384] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:21.275 [2024-11-20 10:58:10.323397] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4597650e-73b3-438d-be86-2f3c36dbe5c0 00:21:21.275 [2024-11-20 10:58:10.323408] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:21.275 [2024-11-20 10:58:10.323420] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:21.275 [2024-11-20 10:58:10.323432] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:21.275 [2024-11-20 10:58:10.323444] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:21.275 [2024-11-20 10:58:10.323454] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:21.275 [2024-11-20 10:58:10.323466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:21.275 [2024-11-20 10:58:10.323476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:21.275 [2024-11-20 10:58:10.323490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:21.275 [2024-11-20 10:58:10.323499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:21.275 [2024-11-20 10:58:10.323511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.275 [2024-11-20 10:58:10.323521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:21.275 [2024-11-20 10:58:10.323534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.403 ms 00:21:21.275 [2024-11-20 10:58:10.323543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.275 [2024-11-20 10:58:10.342787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.275 [2024-11-20 10:58:10.342821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:21.275 [2024-11-20 10:58:10.342835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.224 ms 00:21:21.275 [2024-11-20 10:58:10.342845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.275 [2024-11-20 10:58:10.343423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.275 [2024-11-20 10:58:10.343437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:21.275 [2024-11-20 10:58:10.343450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:21:21.275 [2024-11-20 10:58:10.343466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.275 [2024-11-20 10:58:10.395498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.275 [2024-11-20 10:58:10.395534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:21.275 [2024-11-20 10:58:10.395551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.275 [2024-11-20 10:58:10.395560] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:21.275 [2024-11-20 10:58:10.395624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.275 [2024-11-20 10:58:10.395652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:21.275 [2024-11-20 10:58:10.395665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.275 [2024-11-20 10:58:10.395674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.275 [2024-11-20 10:58:10.395779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.275 [2024-11-20 10:58:10.395796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:21.275 [2024-11-20 10:58:10.395809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.275 [2024-11-20 10:58:10.395819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.275 [2024-11-20 10:58:10.395838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.275 [2024-11-20 10:58:10.395848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:21.275 [2024-11-20 10:58:10.395860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.275 [2024-11-20 10:58:10.395869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.275 [2024-11-20 10:58:10.509264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.275 [2024-11-20 10:58:10.509312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:21.275 [2024-11-20 10:58:10.509332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.275 [2024-11-20 10:58:10.509342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.534 [2024-11-20 10:58:10.606967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.534 [2024-11-20 10:58:10.607145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:21.534 [2024-11-20 10:58:10.607172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.534 [2024-11-20 10:58:10.607183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.534 [2024-11-20 10:58:10.607291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.534 [2024-11-20 10:58:10.607303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:21.534 [2024-11-20 10:58:10.607320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.534 [2024-11-20 10:58:10.607330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.534 [2024-11-20 10:58:10.607380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.534 [2024-11-20 10:58:10.607392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:21.534 [2024-11-20 10:58:10.607404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.534 [2024-11-20 10:58:10.607414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.534 [2024-11-20 10:58:10.607532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.534 [2024-11-20 10:58:10.607545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:21.534 [2024-11-20 10:58:10.607565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:21:21.534 [2024-11-20 10:58:10.607575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:21.534 [2024-11-20 10:58:10.607643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:21.534 [2024-11-20 10:58:10.607656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:21:21.534 [2024-11-20 10:58:10.607669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:21.534 [2024-11-20 10:58:10.607679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:21.534 [2024-11-20 10:58:10.607720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:21.534 [2024-11-20 10:58:10.607731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:21.534 [2024-11-20 10:58:10.607743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:21.534 [2024-11-20 10:58:10.607755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:21.534 [2024-11-20 10:58:10.607799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:21.534 [2024-11-20 10:58:10.607820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:21.534 [2024-11-20 10:58:10.607834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:21.534 [2024-11-20 10:58:10.607844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:21.534 [2024-11-20 10:58:10.607968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 690.997 ms, result 0
00:21:21.534 true
00:21:21.534 10:58:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77522
00:21:21.534 10:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77522 ']'
00:21:21.534 10:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77522
00:21:21.534 10:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname
00:21:21.534 10:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:21.534 10:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77522
00:21:21.534 10:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:21.534 killing process with pid 77522
Received shutdown signal, test time was about 4.000000 seconds
00:21:21.534
00:21:21.534 Latency(us)
00:21:21.534 [2024-11-20T10:58:10.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:21.534 [2024-11-20T10:58:10.787Z] ===================================================================================================================
00:21:21.534 [2024-11-20T10:58:10.787Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:21.534 10:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:21.534 10:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77522'
00:21:21.534 10:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77522
00:21:21.534 10:58:10 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77522
00:21:24.852 10:58:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:21:24.852 10:58:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:21:24.853 Remove shared memory files
10:58:14 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:21:24.853 10:58:14
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:21:24.853 10:58:14 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:21:25.128 10:58:14 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:21:25.128 10:58:14 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:25.128 10:58:14 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:21:25.128 ************************************ 00:21:25.128 END TEST ftl_bdevperf 00:21:25.128 ************************************ 00:21:25.128 00:21:25.128 real 0m25.402s 00:21:25.128 user 0m27.831s 00:21:25.128 sys 0m1.268s 00:21:25.128 10:58:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.128 10:58:14 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:25.128 10:58:14 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:25.128 10:58:14 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:25.128 10:58:14 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.128 10:58:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:25.128 ************************************ 00:21:25.128 START TEST ftl_trim 00:21:25.128 ************************************ 00:21:25.128 10:58:14 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:25.128 * Looking for test storage... 00:21:25.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:25.128 10:58:14 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:25.128 10:58:14 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:21:25.128 10:58:14 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:25.386 10:58:14 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:25.386 10:58:14 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:21:25.386 10:58:14 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:25.386 10:58:14 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:25.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.386 --rc genhtml_branch_coverage=1 00:21:25.386 --rc genhtml_function_coverage=1 00:21:25.386 --rc genhtml_legend=1 00:21:25.386 --rc geninfo_all_blocks=1 00:21:25.386 --rc geninfo_unexecuted_blocks=1 00:21:25.386 00:21:25.386 ' 00:21:25.386 10:58:14 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:25.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.386 --rc genhtml_branch_coverage=1 00:21:25.386 --rc genhtml_function_coverage=1 00:21:25.386 --rc genhtml_legend=1 00:21:25.386 --rc geninfo_all_blocks=1 00:21:25.386 --rc geninfo_unexecuted_blocks=1 00:21:25.386 00:21:25.386 ' 00:21:25.386 10:58:14 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:25.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.386 --rc genhtml_branch_coverage=1 00:21:25.386 --rc genhtml_function_coverage=1 00:21:25.386 --rc genhtml_legend=1 00:21:25.386 --rc geninfo_all_blocks=1 00:21:25.386 --rc geninfo_unexecuted_blocks=1 00:21:25.386 00:21:25.386 ' 00:21:25.386 10:58:14 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:25.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.386 --rc genhtml_branch_coverage=1 00:21:25.386 --rc genhtml_function_coverage=1 00:21:25.386 --rc genhtml_legend=1 00:21:25.386 --rc geninfo_all_blocks=1 00:21:25.386 --rc geninfo_unexecuted_blocks=1 00:21:25.386 00:21:25.386 ' 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:25.386 10:58:14 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=77886 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 77886 00:21:25.386 10:58:14 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77886 ']' 00:21:25.386 10:58:14 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.386 10:58:14 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.386 10:58:14 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.386 10:58:14 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.386 10:58:14 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:25.386 10:58:14 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:21:25.386 [2024-11-20 10:58:14.542452] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:21:25.386 [2024-11-20 10:58:14.542622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77886 ] 00:21:25.644 [2024-11-20 10:58:14.724109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:25.644 [2024-11-20 10:58:14.830224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.644 [2024-11-20 10:58:14.830345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.644 [2024-11-20 10:58:14.830380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.576 10:58:15 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.576 10:58:15 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:26.576 10:58:15 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:26.576 10:58:15 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:21:26.576 10:58:15 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:26.576 10:58:15 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:21:26.576 10:58:15 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:21:26.576 10:58:15 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:26.833 10:58:15 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:26.833 10:58:15 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:21:26.833 10:58:15 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:26.833 10:58:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:26.833 10:58:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:26.833 10:58:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:26.833 10:58:15 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:26.833 10:58:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:27.091 10:58:16 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:27.091 { 00:21:27.091 "name": "nvme0n1", 00:21:27.091 "aliases": [ 
00:21:27.091 "c9e28bfb-0af9-4435-bb27-651f6dcc2d5e" 00:21:27.091 ], 00:21:27.091 "product_name": "NVMe disk", 00:21:27.091 "block_size": 4096, 00:21:27.091 "num_blocks": 1310720, 00:21:27.091 "uuid": "c9e28bfb-0af9-4435-bb27-651f6dcc2d5e", 00:21:27.091 "numa_id": -1, 00:21:27.091 "assigned_rate_limits": { 00:21:27.091 "rw_ios_per_sec": 0, 00:21:27.091 "rw_mbytes_per_sec": 0, 00:21:27.091 "r_mbytes_per_sec": 0, 00:21:27.091 "w_mbytes_per_sec": 0 00:21:27.091 }, 00:21:27.091 "claimed": true, 00:21:27.091 "claim_type": "read_many_write_one", 00:21:27.091 "zoned": false, 00:21:27.091 "supported_io_types": { 00:21:27.091 "read": true, 00:21:27.091 "write": true, 00:21:27.091 "unmap": true, 00:21:27.091 "flush": true, 00:21:27.091 "reset": true, 00:21:27.091 "nvme_admin": true, 00:21:27.091 "nvme_io": true, 00:21:27.091 "nvme_io_md": false, 00:21:27.091 "write_zeroes": true, 00:21:27.091 "zcopy": false, 00:21:27.091 "get_zone_info": false, 00:21:27.091 "zone_management": false, 00:21:27.091 "zone_append": false, 00:21:27.091 "compare": true, 00:21:27.091 "compare_and_write": false, 00:21:27.091 "abort": true, 00:21:27.091 "seek_hole": false, 00:21:27.091 "seek_data": false, 00:21:27.091 "copy": true, 00:21:27.091 "nvme_iov_md": false 00:21:27.091 }, 00:21:27.091 "driver_specific": { 00:21:27.091 "nvme": [ 00:21:27.091 { 00:21:27.091 "pci_address": "0000:00:11.0", 00:21:27.091 "trid": { 00:21:27.091 "trtype": "PCIe", 00:21:27.091 "traddr": "0000:00:11.0" 00:21:27.091 }, 00:21:27.091 "ctrlr_data": { 00:21:27.091 "cntlid": 0, 00:21:27.091 "vendor_id": "0x1b36", 00:21:27.091 "model_number": "QEMU NVMe Ctrl", 00:21:27.091 "serial_number": "12341", 00:21:27.091 "firmware_revision": "8.0.0", 00:21:27.091 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:27.091 "oacs": { 00:21:27.091 "security": 0, 00:21:27.091 "format": 1, 00:21:27.091 "firmware": 0, 00:21:27.091 "ns_manage": 1 00:21:27.091 }, 00:21:27.091 "multi_ctrlr": false, 00:21:27.091 "ana_reporting": false 00:21:27.091 }, 00:21:27.091 "vs": { 00:21:27.091 "nvme_version": "1.4" 00:21:27.091 }, 00:21:27.091 "ns_data": { 00:21:27.091 "id": 1, 00:21:27.091 "can_share": false 00:21:27.091 } 00:21:27.091 } 00:21:27.091 ], 00:21:27.091 "mp_policy": "active_passive" 00:21:27.091 } 00:21:27.091 } 00:21:27.091 ]' 00:21:27.091 10:58:16 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:27.091 10:58:16 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:27.091 10:58:16 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:27.091 10:58:16 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:27.091 10:58:16 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:27.091 10:58:16 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:21:27.091 10:58:16 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:21:27.091 10:58:16 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:27.091 10:58:16 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:21:27.091 10:58:16 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:27.091 10:58:16 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:27.348 10:58:16 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=52775c6c-680d-40e9-bd82-11b3e401e392 00:21:27.348 10:58:16 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:21:27.348 10:58:16 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 52775c6c-680d-40e9-bd82-11b3e401e392 00:21:27.606 10:58:16 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:27.875 10:58:16 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=478a5d05-92b1-476d-ba91-47bab5336237 00:21:27.875 10:58:16 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 478a5d05-92b1-476d-ba91-47bab5336237 00:21:27.875 10:58:17 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=7f088897-8c4f-4027-9ec9-0d9dbaecb30e 00:21:27.875 10:58:17 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7f088897-8c4f-4027-9ec9-0d9dbaecb30e 00:21:27.875 10:58:17 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:27.875 10:58:17 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:27.875 10:58:17 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=7f088897-8c4f-4027-9ec9-0d9dbaecb30e 00:21:27.875 10:58:17 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:27.875 10:58:17 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 7f088897-8c4f-4027-9ec9-0d9dbaecb30e 00:21:27.875 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7f088897-8c4f-4027-9ec9-0d9dbaecb30e 00:21:27.875 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:27.875 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:27.875 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:27.875 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7f088897-8c4f-4027-9ec9-0d9dbaecb30e 00:21:28.144 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:28.144 { 00:21:28.144 "name": "7f088897-8c4f-4027-9ec9-0d9dbaecb30e", 00:21:28.144 "aliases": [ 00:21:28.144 "lvs/nvme0n1p0" 00:21:28.144 ], 00:21:28.144 "product_name": "Logical Volume", 00:21:28.144 "block_size": 4096, 00:21:28.144 "num_blocks": 26476544, 00:21:28.144 "uuid": "7f088897-8c4f-4027-9ec9-0d9dbaecb30e", 00:21:28.144 "assigned_rate_limits": { 00:21:28.144 "rw_ios_per_sec": 0, 00:21:28.144 "rw_mbytes_per_sec": 0, 00:21:28.144 "r_mbytes_per_sec": 0, 00:21:28.144 "w_mbytes_per_sec": 0 00:21:28.144 }, 00:21:28.144 "claimed": false, 00:21:28.144 "zoned": false, 00:21:28.144 "supported_io_types": { 00:21:28.144 "read": true, 00:21:28.144 "write": true, 00:21:28.144 "unmap": true, 00:21:28.144 "flush": false, 00:21:28.144 "reset": true, 00:21:28.144 "nvme_admin": false, 00:21:28.144 "nvme_io": false, 00:21:28.144 "nvme_io_md": false, 00:21:28.144 "write_zeroes": true, 00:21:28.145 "zcopy": false, 00:21:28.145 "get_zone_info": false, 00:21:28.145 "zone_management": false, 00:21:28.145 "zone_append": false, 00:21:28.145 "compare": false, 00:21:28.145 "compare_and_write": false, 00:21:28.145 "abort": false, 00:21:28.145 "seek_hole": true, 00:21:28.145 "seek_data": true, 00:21:28.145 "copy": false, 00:21:28.145 "nvme_iov_md": false 00:21:28.145 }, 00:21:28.145 "driver_specific": { 00:21:28.145 "lvol": { 00:21:28.145 "lvol_store_uuid": "478a5d05-92b1-476d-ba91-47bab5336237", 00:21:28.145 "base_bdev": "nvme0n1", 00:21:28.145 "thin_provision": true, 00:21:28.145 "num_allocated_clusters": 0, 00:21:28.145 "snapshot": false, 00:21:28.145 "clone": false, 00:21:28.145 "esnap_clone": false 00:21:28.145 } 00:21:28.145 } 00:21:28.145 } 00:21:28.145 ]' 00:21:28.145 10:58:17 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:28.145 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:28.145 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:28.145 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:28.145 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:28.145 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:28.145 10:58:17 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:28.145 10:58:17 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:28.145 10:58:17 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:28.403 10:58:17 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:28.403 10:58:17 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:28.403 10:58:17 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 7f088897-8c4f-4027-9ec9-0d9dbaecb30e 00:21:28.403 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7f088897-8c4f-4027-9ec9-0d9dbaecb30e 00:21:28.403 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:28.403 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:28.403 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:28.403 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7f088897-8c4f-4027-9ec9-0d9dbaecb30e 00:21:28.661 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:28.661 { 00:21:28.661 "name": "7f088897-8c4f-4027-9ec9-0d9dbaecb30e", 00:21:28.661 "aliases": [ 00:21:28.661 "lvs/nvme0n1p0" 00:21:28.661 ], 00:21:28.661 "product_name": "Logical Volume", 00:21:28.661 "block_size": 4096, 00:21:28.661 "num_blocks": 26476544, 00:21:28.661 "uuid": "7f088897-8c4f-4027-9ec9-0d9dbaecb30e", 00:21:28.661 "assigned_rate_limits": { 00:21:28.661 "rw_ios_per_sec": 0, 00:21:28.661 "rw_mbytes_per_sec": 0, 00:21:28.661 "r_mbytes_per_sec": 0, 00:21:28.661 "w_mbytes_per_sec": 0 00:21:28.661 }, 00:21:28.661 "claimed": false, 00:21:28.661 "zoned": false, 00:21:28.661 "supported_io_types": { 00:21:28.661 "read": true, 00:21:28.661 "write": true, 00:21:28.661 "unmap": true, 00:21:28.661 "flush": false, 00:21:28.661 "reset": true, 00:21:28.661 "nvme_admin": false, 00:21:28.661 "nvme_io": false, 00:21:28.661 "nvme_io_md": false, 00:21:28.661 "write_zeroes": true, 00:21:28.661 "zcopy": false, 00:21:28.661 "get_zone_info": false, 00:21:28.661 "zone_management": false, 00:21:28.661 "zone_append": false, 00:21:28.661 "compare": false, 00:21:28.661 "compare_and_write": false, 00:21:28.661 "abort": false, 00:21:28.661 "seek_hole": true, 00:21:28.661 "seek_data": true, 00:21:28.661 "copy": false, 00:21:28.661 "nvme_iov_md": false 00:21:28.661 }, 00:21:28.661 "driver_specific": { 00:21:28.661 "lvol": { 00:21:28.661 "lvol_store_uuid": "478a5d05-92b1-476d-ba91-47bab5336237", 00:21:28.661 "base_bdev": "nvme0n1", 00:21:28.661 "thin_provision": true, 00:21:28.661 "num_allocated_clusters": 0, 00:21:28.661 "snapshot": false, 00:21:28.661 "clone": false, 00:21:28.661 "esnap_clone": false 00:21:28.661 } 00:21:28.661 } 00:21:28.661 } 00:21:28.661 ]' 00:21:28.661 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:28.661 10:58:17 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:21:28.661 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:28.661 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:28.661 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:28.661 10:58:17 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:28.661 10:58:17 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:28.661 10:58:17 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:28.919 10:58:18 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:28.919 10:58:18 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:28.919 10:58:18 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 7f088897-8c4f-4027-9ec9-0d9dbaecb30e 00:21:28.919 10:58:18 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7f088897-8c4f-4027-9ec9-0d9dbaecb30e 00:21:28.919 10:58:18 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:28.919 10:58:18 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:28.919 10:58:18 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:28.919 10:58:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7f088897-8c4f-4027-9ec9-0d9dbaecb30e 00:21:29.178 10:58:18 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:29.178 { 00:21:29.178 "name": "7f088897-8c4f-4027-9ec9-0d9dbaecb30e", 00:21:29.178 "aliases": [ 00:21:29.178 "lvs/nvme0n1p0" 00:21:29.178 ], 00:21:29.178 "product_name": "Logical Volume", 00:21:29.178 "block_size": 4096, 00:21:29.178 "num_blocks": 26476544, 00:21:29.178 "uuid": "7f088897-8c4f-4027-9ec9-0d9dbaecb30e", 00:21:29.178 "assigned_rate_limits": { 00:21:29.178 "rw_ios_per_sec": 0, 00:21:29.178 "rw_mbytes_per_sec": 0, 00:21:29.178 "r_mbytes_per_sec": 0, 00:21:29.178 "w_mbytes_per_sec": 0 00:21:29.178 }, 00:21:29.178 "claimed": false, 00:21:29.178 "zoned": false, 00:21:29.178 "supported_io_types": { 00:21:29.178 "read": true, 00:21:29.178 "write": true, 00:21:29.178 "unmap": true, 00:21:29.178 "flush": false, 00:21:29.178 "reset": true, 00:21:29.178 "nvme_admin": false, 00:21:29.178 "nvme_io": false, 00:21:29.178 "nvme_io_md": false, 00:21:29.178 "write_zeroes": true, 00:21:29.178 "zcopy": false, 00:21:29.178 "get_zone_info": false, 00:21:29.178 "zone_management": false, 00:21:29.178 "zone_append": false, 00:21:29.178 "compare": false, 00:21:29.178 "compare_and_write": false, 00:21:29.178 "abort": false, 00:21:29.178 "seek_hole": true, 00:21:29.178 "seek_data": true, 00:21:29.178 "copy": false, 00:21:29.178 "nvme_iov_md": false 00:21:29.178 }, 00:21:29.178 "driver_specific": { 00:21:29.178 "lvol": { 00:21:29.178 "lvol_store_uuid": "478a5d05-92b1-476d-ba91-47bab5336237", 00:21:29.178 "base_bdev": "nvme0n1", 00:21:29.178 "thin_provision": true, 00:21:29.178 "num_allocated_clusters": 0, 00:21:29.178 "snapshot": false, 00:21:29.178 "clone": false, 00:21:29.178 "esnap_clone": false 00:21:29.178 } 00:21:29.178 } 00:21:29.178 } 00:21:29.178 ]' 00:21:29.178 10:58:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:29.178 10:58:18 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:29.178 10:58:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:29.178 10:58:18 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:21:29.178 10:58:18 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:29.178 10:58:18 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:29.178 10:58:18 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:29.178 10:58:18 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7f088897-8c4f-4027-9ec9-0d9dbaecb30e -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:29.438 [2024-11-20 10:58:18.534898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.438 [2024-11-20 10:58:18.534953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:29.438 [2024-11-20 10:58:18.534971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:29.438 [2024-11-20 10:58:18.534982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.438 [2024-11-20 10:58:18.538185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.439 [2024-11-20 10:58:18.538220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:29.439 [2024-11-20 10:58:18.538234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.159 ms 00:21:29.439 [2024-11-20 10:58:18.538244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.439 [2024-11-20 10:58:18.538445] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:29.439 [2024-11-20 10:58:18.539427] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:29.439 [2024-11-20 10:58:18.539462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.439 [2024-11-20 10:58:18.539472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:29.439 [2024-11-20 10:58:18.539486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms 00:21:29.439 [2024-11-20 10:58:18.539496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.439 [2024-11-20 10:58:18.539629] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4b8e0acb-4646-4966-b129-dade0fcf8fcb 00:21:29.439 [2024-11-20 10:58:18.541029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.439 [2024-11-20 10:58:18.541059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:29.439 [2024-11-20 10:58:18.541072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:29.439 [2024-11-20 10:58:18.541086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.439 [2024-11-20 10:58:18.548573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.439 [2024-11-20 10:58:18.548616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:29.439 [2024-11-20 10:58:18.548631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.417 ms 00:21:29.439 [2024-11-20 10:58:18.548645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.439 [2024-11-20 10:58:18.548793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.439 [2024-11-20 10:58:18.548810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:29.439 [2024-11-20 10:58:18.548821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.079 ms 00:21:29.439 [2024-11-20 10:58:18.548838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.439 [2024-11-20 10:58:18.548877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.439 [2024-11-20 10:58:18.548891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:29.439 [2024-11-20 10:58:18.548901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:29.439 [2024-11-20 10:58:18.548914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.439 [2024-11-20 10:58:18.548951] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:29.439 [2024-11-20 10:58:18.554139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.439 [2024-11-20 10:58:18.554169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:29.439 [2024-11-20 10:58:18.554187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.200 ms 00:21:29.439 [2024-11-20 10:58:18.554197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.439 [2024-11-20 10:58:18.554289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.439 [2024-11-20 10:58:18.554302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:29.439 [2024-11-20 10:58:18.554315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:29.439 [2024-11-20 10:58:18.554341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.439 [2024-11-20 10:58:18.554380] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:29.439 [2024-11-20 10:58:18.554509] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:29.439 [2024-11-20 10:58:18.554529] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:29.439 [2024-11-20 10:58:18.554543] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:29.439 [2024-11-20 10:58:18.554558] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:29.439 [2024-11-20 10:58:18.554570] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:29.439 [2024-11-20 10:58:18.554583] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:29.439 [2024-11-20 10:58:18.554604] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:29.439 [2024-11-20 10:58:18.554617] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:29.439 [2024-11-20 10:58:18.554629] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:29.439 [2024-11-20 10:58:18.554642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.439 [2024-11-20 10:58:18.554652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:29.439 [2024-11-20 10:58:18.554665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:21:29.439 [2024-11-20 10:58:18.554675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.439 [2024-11-20 10:58:18.554774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.439 
[2024-11-20 10:58:18.554785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:29.439 [2024-11-20 10:58:18.554798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:29.439 [2024-11-20 10:58:18.554807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.439 [2024-11-20 10:58:18.554921] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:29.439 [2024-11-20 10:58:18.554932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:29.439 [2024-11-20 10:58:18.554945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:29.439 [2024-11-20 10:58:18.554955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.439 [2024-11-20 10:58:18.554968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:29.439 [2024-11-20 10:58:18.554977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:29.439 [2024-11-20 10:58:18.554988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:29.439 [2024-11-20 10:58:18.554998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:29.439 [2024-11-20 10:58:18.555011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:29.439 [2024-11-20 10:58:18.555021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:29.439 [2024-11-20 10:58:18.555032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:29.439 [2024-11-20 10:58:18.555041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:29.439 [2024-11-20 10:58:18.555053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:29.439 [2024-11-20 10:58:18.555062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:29.439 [2024-11-20 10:58:18.555074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:29.439 [2024-11-20 10:58:18.555083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.439 [2024-11-20 10:58:18.555097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:29.439 [2024-11-20 10:58:18.555107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:29.439 [2024-11-20 10:58:18.555118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.439 [2024-11-20 10:58:18.555127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:29.439 [2024-11-20 10:58:18.555140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:29.439 [2024-11-20 10:58:18.555150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:29.439 [2024-11-20 10:58:18.555161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:29.439 [2024-11-20 10:58:18.555170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:29.439 [2024-11-20 10:58:18.555182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:29.439 [2024-11-20 10:58:18.555191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:29.439 [2024-11-20 10:58:18.555202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:29.439 [2024-11-20 10:58:18.555210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:29.439 [2024-11-20 10:58:18.555222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:21:29.440 [2024-11-20 10:58:18.555231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:29.440 [2024-11-20 10:58:18.555242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:29.440 [2024-11-20 10:58:18.555251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:29.440 [2024-11-20 10:58:18.555264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:29.440 [2024-11-20 10:58:18.555273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:29.440 [2024-11-20 10:58:18.555284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:29.440 [2024-11-20 10:58:18.555293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:29.440 [2024-11-20 10:58:18.555305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:29.440 [2024-11-20 10:58:18.555314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:29.440 [2024-11-20 10:58:18.555325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:29.440 [2024-11-20 10:58:18.555334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.440 [2024-11-20 10:58:18.555346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:29.440 [2024-11-20 10:58:18.555356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:29.440 [2024-11-20 10:58:18.555367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.440 [2024-11-20 10:58:18.555376] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:29.440 [2024-11-20 10:58:18.555388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:29.440 [2024-11-20 10:58:18.555397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:29.440 [2024-11-20 10:58:18.555409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.440 [2024-11-20 10:58:18.555419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:29.440 [2024-11-20 10:58:18.555434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:29.440 [2024-11-20 10:58:18.555444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:29.440 [2024-11-20 10:58:18.555456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:29.440 [2024-11-20 10:58:18.555464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:29.440 [2024-11-20 10:58:18.555476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:29.440 [2024-11-20 10:58:18.555490] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:29.440 [2024-11-20 10:58:18.555506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:29.440 [2024-11-20 10:58:18.555517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:29.440 [2024-11-20 10:58:18.555530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:29.440 [2024-11-20 10:58:18.555540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:21:29.440 [2024-11-20 10:58:18.555553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:29.440 [2024-11-20 10:58:18.555563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:29.440 [2024-11-20 10:58:18.555575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:29.440 [2024-11-20 10:58:18.555585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:29.440 [2024-11-20 10:58:18.555607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:29.440 [2024-11-20 10:58:18.555618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:29.440 [2024-11-20 10:58:18.555637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:29.440 [2024-11-20 10:58:18.555647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:29.440 [2024-11-20 10:58:18.555659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:29.440 [2024-11-20 10:58:18.555669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:29.440 [2024-11-20 10:58:18.555683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:29.440 [2024-11-20 10:58:18.555693] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:29.440 [2024-11-20 10:58:18.555714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:29.440 [2024-11-20 10:58:18.555725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:29.440 [2024-11-20 10:58:18.555739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:29.440 [2024-11-20 10:58:18.555750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:29.440 [2024-11-20 10:58:18.555763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:29.440 [2024-11-20 10:58:18.555773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.440 [2024-11-20 10:58:18.555786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:29.440 [2024-11-20 10:58:18.555796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:21:29.440 [2024-11-20 10:58:18.555808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.440 [2024-11-20 10:58:18.555890] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:21:29.440 [2024-11-20 10:58:18.555912] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:33.629 [2024-11-20 10:58:22.208071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.629 [2024-11-20 10:58:22.208160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:33.629 [2024-11-20 10:58:22.208176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3658.106 ms 00:21:33.629 [2024-11-20 10:58:22.208190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.629 [2024-11-20 10:58:22.247224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.629 [2024-11-20 10:58:22.247281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:33.629 [2024-11-20 10:58:22.247296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.633 ms 00:21:33.629 [2024-11-20 10:58:22.247326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.629 [2024-11-20 10:58:22.247459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.629 [2024-11-20 10:58:22.247475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:33.629 [2024-11-20 10:58:22.247486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:33.629 [2024-11-20 10:58:22.247502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.629 [2024-11-20 10:58:22.311301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.629 [2024-11-20 10:58:22.311359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:33.629 [2024-11-20 10:58:22.311378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.846 ms 00:21:33.629 [2024-11-20 10:58:22.311396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.629 [2024-11-20 10:58:22.311511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.629 [2024-11-20 10:58:22.311531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:33.629 [2024-11-20 10:58:22.311545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:33.629 [2024-11-20 10:58:22.311561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.629 [2024-11-20 10:58:22.312043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.629 [2024-11-20 10:58:22.312072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:33.629 [2024-11-20 10:58:22.312087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:21:33.629 [2024-11-20 10:58:22.312103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.629 [2024-11-20 10:58:22.312241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.629 [2024-11-20 10:58:22.312257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:33.629 [2024-11-20 10:58:22.312271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:21:33.629 [2024-11-20 10:58:22.312298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.629 [2024-11-20 10:58:22.333537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.629 [2024-11-20 10:58:22.333582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:21:33.629 [2024-11-20 10:58:22.333618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.223 ms 00:21:33.629 [2024-11-20 10:58:22.333632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.629 [2024-11-20 10:58:22.346188] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:33.629 [2024-11-20 10:58:22.362481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.629 [2024-11-20 10:58:22.362542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:33.629 [2024-11-20 10:58:22.362559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.777 ms 00:21:33.629 [2024-11-20 10:58:22.362569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.629 [2024-11-20 10:58:22.462228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.629 [2024-11-20 10:58:22.462280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:33.629 [2024-11-20 10:58:22.462298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.696 ms 00:21:33.629 [2024-11-20 10:58:22.462317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.629 [2024-11-20 10:58:22.462557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.629 [2024-11-20 10:58:22.462572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:33.629 [2024-11-20 10:58:22.462589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:21:33.629 [2024-11-20 10:58:22.462610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.629 [2024-11-20 10:58:22.499274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.629 [2024-11-20 10:58:22.499326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:33.629 [2024-11-20 10:58:22.499344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.682 ms 00:21:33.629 [2024-11-20 10:58:22.499371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.629 [2024-11-20 10:58:22.534601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.629 [2024-11-20 10:58:22.534637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:33.629 [2024-11-20 10:58:22.534654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.184 ms 00:21:33.629 [2024-11-20 10:58:22.534664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.629 [2024-11-20 10:58:22.535386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.629 [2024-11-20 10:58:22.535408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:33.629 [2024-11-20 10:58:22.535423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:21:33.629 [2024-11-20 10:58:22.535433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.629 [2024-11-20 10:58:22.634922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.629 [2024-11-20 10:58:22.634965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:33.629 [2024-11-20 10:58:22.634991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.613 ms 00:21:33.629 [2024-11-20 10:58:22.635002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
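(Aside on the L2P numbers just logged, with the arithmetic filled in; every figure comes from the layout dump and the bdev_ftl_create parameters earlier in this trace:

    L2P entries         23592960          # one per 4 KiB user block: 23592960 * 4096 B = 90 GiB of addressable user space
    L2P address size    4 bytes
    L2P table size      23592960 * 4 B = 94371840 B = 90.00 MiB   # matches the "Region l2p" size in the layout dump
    --l2p_dram_limit 60                   # caps the resident slice of that table, hence "l2p maximum resident size is: 59 (of 60) MiB"
)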
00:21:33.629 [2024-11-20 10:58:22.672512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.629 [2024-11-20 10:58:22.672548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:33.629 [2024-11-20 10:58:22.672565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.456 ms 00:21:33.629 [2024-11-20 10:58:22.672576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.629 [2024-11-20 10:58:22.707960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.630 [2024-11-20 10:58:22.708006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:33.630 [2024-11-20 10:58:22.708023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.349 ms 00:21:33.630 [2024-11-20 10:58:22.708032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.630 [2024-11-20 10:58:22.743996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.630 [2024-11-20 10:58:22.744030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:33.630 [2024-11-20 10:58:22.744046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.937 ms 00:21:33.630 [2024-11-20 10:58:22.744072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.630 [2024-11-20 10:58:22.744163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.630 [2024-11-20 10:58:22.744178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:33.630 [2024-11-20 10:58:22.744194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:33.630 [2024-11-20 10:58:22.744203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.630 [2024-11-20 10:58:22.744287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.630 [2024-11-20 10:58:22.744298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:33.630 [2024-11-20 10:58:22.744310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:33.630 [2024-11-20 10:58:22.744319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.630 [2024-11-20 10:58:22.745224] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:33.630 [2024-11-20 10:58:22.749430] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4216.890 ms, result 0 00:21:33.630 [2024-11-20 10:58:22.750283] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:33.630 { 00:21:33.630 "name": "ftl0", 00:21:33.630 "uuid": "4b8e0acb-4646-4966-b129-dade0fcf8fcb" 00:21:33.630 } 00:21:33.630 10:58:22 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:21:33.630 10:58:22 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:33.630 10:58:22 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:33.630 10:58:22 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:21:33.630 10:58:22 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:33.630 10:58:22 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:33.630 10:58:22 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:33.888 10:58:22 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:34.147 [ 00:21:34.147 { 00:21:34.147 "name": "ftl0", 00:21:34.147 "aliases": [ 00:21:34.147 "4b8e0acb-4646-4966-b129-dade0fcf8fcb" 00:21:34.147 ], 00:21:34.147 "product_name": "FTL disk", 00:21:34.147 "block_size": 4096, 00:21:34.147 "num_blocks": 23592960, 00:21:34.147 "uuid": "4b8e0acb-4646-4966-b129-dade0fcf8fcb", 00:21:34.147 "assigned_rate_limits": { 00:21:34.147 "rw_ios_per_sec": 0, 00:21:34.147 "rw_mbytes_per_sec": 0, 00:21:34.147 "r_mbytes_per_sec": 0, 00:21:34.147 "w_mbytes_per_sec": 0 00:21:34.147 }, 00:21:34.147 "claimed": false, 00:21:34.147 "zoned": false, 00:21:34.147 "supported_io_types": { 00:21:34.147 "read": true, 00:21:34.147 "write": true, 00:21:34.147 "unmap": true, 00:21:34.147 "flush": true, 00:21:34.147 "reset": false, 00:21:34.147 "nvme_admin": false, 00:21:34.147 "nvme_io": false, 00:21:34.147 "nvme_io_md": false, 00:21:34.147 "write_zeroes": true, 00:21:34.147 "zcopy": false, 00:21:34.147 "get_zone_info": false, 00:21:34.147 "zone_management": false, 00:21:34.147 "zone_append": false, 00:21:34.147 "compare": false, 00:21:34.147 "compare_and_write": false, 00:21:34.147 "abort": false, 00:21:34.147 "seek_hole": false, 00:21:34.147 "seek_data": false, 00:21:34.147 "copy": false, 00:21:34.147 "nvme_iov_md": false 00:21:34.147 }, 00:21:34.147 "driver_specific": { 00:21:34.147 "ftl": { 00:21:34.147 "base_bdev": "7f088897-8c4f-4027-9ec9-0d9dbaecb30e", 00:21:34.147 "cache": "nvc0n1p0" 00:21:34.147 } 00:21:34.147 } 00:21:34.147 } 00:21:34.147 ] 00:21:34.147 10:58:23 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:21:34.147 10:58:23 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:21:34.147 10:58:23 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:34.147 10:58:23 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:21:34.147 10:58:23 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:21:34.406 10:58:23 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:21:34.406 { 00:21:34.406 "name": "ftl0", 00:21:34.406 "aliases": [ 00:21:34.406 "4b8e0acb-4646-4966-b129-dade0fcf8fcb" 00:21:34.406 ], 00:21:34.406 "product_name": "FTL disk", 00:21:34.406 "block_size": 4096, 00:21:34.406 "num_blocks": 23592960, 00:21:34.406 "uuid": "4b8e0acb-4646-4966-b129-dade0fcf8fcb", 00:21:34.406 "assigned_rate_limits": { 00:21:34.406 "rw_ios_per_sec": 0, 00:21:34.406 "rw_mbytes_per_sec": 0, 00:21:34.406 "r_mbytes_per_sec": 0, 00:21:34.406 "w_mbytes_per_sec": 0 00:21:34.406 }, 00:21:34.406 "claimed": false, 00:21:34.406 "zoned": false, 00:21:34.406 "supported_io_types": { 00:21:34.406 "read": true, 00:21:34.406 "write": true, 00:21:34.406 "unmap": true, 00:21:34.406 "flush": true, 00:21:34.406 "reset": false, 00:21:34.406 "nvme_admin": false, 00:21:34.406 "nvme_io": false, 00:21:34.406 "nvme_io_md": false, 00:21:34.406 "write_zeroes": true, 00:21:34.406 "zcopy": false, 00:21:34.406 "get_zone_info": false, 00:21:34.406 "zone_management": false, 00:21:34.406 "zone_append": false, 00:21:34.406 "compare": false, 00:21:34.406 "compare_and_write": false, 00:21:34.406 "abort": false, 00:21:34.406 "seek_hole": false, 00:21:34.406 "seek_data": false, 00:21:34.406 "copy": false, 00:21:34.406 "nvme_iov_md": false 00:21:34.406 }, 00:21:34.406 "driver_specific": { 00:21:34.406 "ftl": { 00:21:34.406 "base_bdev": "7f088897-8c4f-4027-9ec9-0d9dbaecb30e", 
00:21:34.406 "cache": "nvc0n1p0" 00:21:34.406 } 00:21:34.406 } 00:21:34.406 } 00:21:34.406 ]' 00:21:34.406 10:58:23 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:21:34.406 10:58:23 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:21:34.406 10:58:23 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:34.665 [2024-11-20 10:58:23.797457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.665 [2024-11-20 10:58:23.797504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:34.665 [2024-11-20 10:58:23.797523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:34.665 [2024-11-20 10:58:23.797539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.665 [2024-11-20 10:58:23.797574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:34.665 [2024-11-20 10:58:23.801873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.665 [2024-11-20 10:58:23.801901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:34.665 [2024-11-20 10:58:23.801922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.284 ms 00:21:34.665 [2024-11-20 10:58:23.801933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.665 [2024-11-20 10:58:23.802429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.665 [2024-11-20 10:58:23.802446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:34.665 [2024-11-20 10:58:23.802460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:21:34.665 [2024-11-20 10:58:23.802469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.665 [2024-11-20 10:58:23.805311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.665 [2024-11-20 10:58:23.805334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:34.665 [2024-11-20 10:58:23.805348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.794 ms 00:21:34.665 [2024-11-20 10:58:23.805358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.665 [2024-11-20 10:58:23.810950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.666 [2024-11-20 10:58:23.810981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:34.666 [2024-11-20 10:58:23.810995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.544 ms 00:21:34.666 [2024-11-20 10:58:23.811005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.666 [2024-11-20 10:58:23.847359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.666 [2024-11-20 10:58:23.847410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:34.666 [2024-11-20 10:58:23.847431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.329 ms 00:21:34.666 [2024-11-20 10:58:23.847441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.666 [2024-11-20 10:58:23.869039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.666 [2024-11-20 10:58:23.869073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:34.666 [2024-11-20 10:58:23.869089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 21.544 ms 00:21:34.666 [2024-11-20 10:58:23.869102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.666 [2024-11-20 10:58:23.869320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.666 [2024-11-20 10:58:23.869334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:34.666 [2024-11-20 10:58:23.869347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:21:34.666 [2024-11-20 10:58:23.869357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.666 [2024-11-20 10:58:23.905115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.666 [2024-11-20 10:58:23.905148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:34.666 [2024-11-20 10:58:23.905164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.777 ms 00:21:34.666 [2024-11-20 10:58:23.905174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.926 [2024-11-20 10:58:23.940777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.926 [2024-11-20 10:58:23.940809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:34.926 [2024-11-20 10:58:23.940827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.557 ms 00:21:34.926 [2024-11-20 10:58:23.940836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.926 [2024-11-20 10:58:23.976198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.926 [2024-11-20 10:58:23.976231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:34.926 [2024-11-20 10:58:23.976246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.320 ms 00:21:34.926 [2024-11-20 10:58:23.976255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.926 [2024-11-20 10:58:24.010942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.926 [2024-11-20 10:58:24.010975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:34.926 [2024-11-20 10:58:24.010990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.604 ms 00:21:34.926 [2024-11-20 10:58:24.010999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.926 [2024-11-20 10:58:24.011082] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:34.926 [2024-11-20 10:58:24.011098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011187] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:34.926 [2024-11-20 10:58:24.011303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 
[2024-11-20 10:58:24.011501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:21:34.927 [2024-11-20 10:58:24.011816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.011999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:34.927 [2024-11-20 10:58:24.012347] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:34.927 [2024-11-20 10:58:24.012362] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4b8e0acb-4646-4966-b129-dade0fcf8fcb 00:21:34.927 [2024-11-20 10:58:24.012373] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:34.927 [2024-11-20 10:58:24.012385] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:34.927 [2024-11-20 10:58:24.012395] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:34.927 [2024-11-20 10:58:24.012407] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:34.927 [2024-11-20 10:58:24.012420] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:34.927 [2024-11-20 10:58:24.012432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:21:34.928 [2024-11-20 10:58:24.012443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:34.928 [2024-11-20 10:58:24.012454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:34.928 [2024-11-20 10:58:24.012463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:34.928 [2024-11-20 10:58:24.012475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.928 [2024-11-20 10:58:24.012485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:34.928 [2024-11-20 10:58:24.012498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.398 ms 00:21:34.928 [2024-11-20 10:58:24.012508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.928 [2024-11-20 10:58:24.032684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.928 [2024-11-20 10:58:24.032714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:34.928 [2024-11-20 10:58:24.032734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.172 ms 00:21:34.928 [2024-11-20 10:58:24.032745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.928 [2024-11-20 10:58:24.033364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.928 [2024-11-20 10:58:24.033379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:34.928 [2024-11-20 10:58:24.033393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:21:34.928 [2024-11-20 10:58:24.033402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.928 [2024-11-20 10:58:24.103372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.928 [2024-11-20 10:58:24.103407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:34.928 [2024-11-20 10:58:24.103422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.928 [2024-11-20 10:58:24.103432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.928 [2024-11-20 10:58:24.103541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.928 [2024-11-20 10:58:24.103553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:34.928 [2024-11-20 10:58:24.103565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.928 [2024-11-20 10:58:24.103575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.928 [2024-11-20 10:58:24.103669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.928 [2024-11-20 10:58:24.103682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:34.928 [2024-11-20 10:58:24.103702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.928 [2024-11-20 10:58:24.103712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.928 [2024-11-20 10:58:24.103746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.928 [2024-11-20 10:58:24.103757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:34.928 [2024-11-20 10:58:24.103770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.928 [2024-11-20 10:58:24.103780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.187 [2024-11-20 10:58:24.233238] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.187 [2024-11-20 10:58:24.233287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:35.187 [2024-11-20 10:58:24.233304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.187 [2024-11-20 10:58:24.233314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.187 [2024-11-20 10:58:24.333145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.187 [2024-11-20 10:58:24.333190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:35.187 [2024-11-20 10:58:24.333222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.187 [2024-11-20 10:58:24.333234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.187 [2024-11-20 10:58:24.333352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.187 [2024-11-20 10:58:24.333365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:35.187 [2024-11-20 10:58:24.333396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.187 [2024-11-20 10:58:24.333410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.187 [2024-11-20 10:58:24.333472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.187 [2024-11-20 10:58:24.333482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:35.187 [2024-11-20 10:58:24.333495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.187 [2024-11-20 10:58:24.333504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.187 [2024-11-20 10:58:24.333649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.187 [2024-11-20 10:58:24.333663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:35.187 [2024-11-20 10:58:24.333676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.187 [2024-11-20 10:58:24.333686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.187 [2024-11-20 10:58:24.333751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.187 [2024-11-20 10:58:24.333763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:35.187 [2024-11-20 10:58:24.333777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.187 [2024-11-20 10:58:24.333787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.187 [2024-11-20 10:58:24.333846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.187 [2024-11-20 10:58:24.333857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:35.187 [2024-11-20 10:58:24.333872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.187 [2024-11-20 10:58:24.333883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.187 [2024-11-20 10:58:24.333947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:35.187 [2024-11-20 10:58:24.333959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:35.187 [2024-11-20 10:58:24.333971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:35.187 [2024-11-20 10:58:24.333980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:35.187 [2024-11-20 10:58:24.334159] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 537.554 ms, result 0 00:21:35.187 true 00:21:35.187 10:58:24 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 77886 00:21:35.187 10:58:24 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77886 ']' 00:21:35.187 10:58:24 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77886 00:21:35.187 10:58:24 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:35.187 10:58:24 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.187 10:58:24 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77886 00:21:35.187 10:58:24 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:35.187 killing process with pid 77886 00:21:35.187 10:58:24 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:35.187 10:58:24 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77886' 00:21:35.187 10:58:24 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77886 00:21:35.187 10:58:24 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77886 00:21:39.397 10:58:27 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:21:39.654 65536+0 records in 00:21:39.654 65536+0 records out 00:21:39.654 268435456 bytes (268 MB, 256 MiB) copied, 0.952955 s, 282 MB/s 00:21:39.654 10:58:28 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:39.654 [2024-11-20 10:58:28.822585] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
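(A quick cross-check of the dd figures a few lines above: 65536 records x 4 KiB = 268435456 bytes copied in 0.952955 s. This is a minimal sketch, assuming GNU dd's decimal-MB convention for the printed rate:)

  # Recompute the transfer rate dd reported: bytes / elapsed seconds, in decimal MB/s
  awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 0.952955 / 1e6 }'
  # prints: 282 MB/s, matching the "282 MB/s" figure in the log above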
00:21:39.655 [2024-11-20 10:58:28.822709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78088 ] 00:21:39.913 [2024-11-20 10:58:28.999068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.913 [2024-11-20 10:58:29.100254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.479 [2024-11-20 10:58:29.433214] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:40.479 [2024-11-20 10:58:29.433280] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:40.479 [2024-11-20 10:58:29.595023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.479 [2024-11-20 10:58:29.595068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:40.479 [2024-11-20 10:58:29.595083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:40.479 [2024-11-20 10:58:29.595108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.479 [2024-11-20 10:58:29.598232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.479 [2024-11-20 10:58:29.598372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:40.479 [2024-11-20 10:58:29.598409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.109 ms 00:21:40.479 [2024-11-20 10:58:29.598419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.479 [2024-11-20 10:58:29.598519] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:40.479 [2024-11-20 10:58:29.599460] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:40.479 [2024-11-20 10:58:29.599487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.479 [2024-11-20 10:58:29.599497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:40.479 [2024-11-20 10:58:29.599508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:21:40.479 [2024-11-20 10:58:29.599518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.479 [2024-11-20 10:58:29.601008] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:40.479 [2024-11-20 10:58:29.619800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.480 [2024-11-20 10:58:29.619944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:40.480 [2024-11-20 10:58:29.619966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.823 ms 00:21:40.480 [2024-11-20 10:58:29.619977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.480 [2024-11-20 10:58:29.620105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.480 [2024-11-20 10:58:29.620120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:40.480 [2024-11-20 10:58:29.620131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:40.480 [2024-11-20 10:58:29.620141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.480 [2024-11-20 10:58:29.626807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:40.480 [2024-11-20 10:58:29.626939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:40.480 [2024-11-20 10:58:29.626958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.636 ms 00:21:40.480 [2024-11-20 10:58:29.626969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.480 [2024-11-20 10:58:29.627075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.480 [2024-11-20 10:58:29.627088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:40.480 [2024-11-20 10:58:29.627099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:40.480 [2024-11-20 10:58:29.627109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.480 [2024-11-20 10:58:29.627136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.480 [2024-11-20 10:58:29.627150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:40.480 [2024-11-20 10:58:29.627160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:40.480 [2024-11-20 10:58:29.627170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.480 [2024-11-20 10:58:29.627192] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:40.480 [2024-11-20 10:58:29.632059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.480 [2024-11-20 10:58:29.632091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:40.480 [2024-11-20 10:58:29.632103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.880 ms 00:21:40.480 [2024-11-20 10:58:29.632114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.480 [2024-11-20 10:58:29.632179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.480 [2024-11-20 10:58:29.632191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:40.480 [2024-11-20 10:58:29.632202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:40.480 [2024-11-20 10:58:29.632212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.480 [2024-11-20 10:58:29.632232] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:40.480 [2024-11-20 10:58:29.632257] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:40.480 [2024-11-20 10:58:29.632290] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:40.480 [2024-11-20 10:58:29.632308] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:40.480 [2024-11-20 10:58:29.632402] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:40.480 [2024-11-20 10:58:29.632416] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:40.480 [2024-11-20 10:58:29.632430] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:40.480 [2024-11-20 10:58:29.632442] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:40.480 [2024-11-20 10:58:29.632458] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:40.480 [2024-11-20 10:58:29.632469] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:40.480 [2024-11-20 10:58:29.632480] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:40.480 [2024-11-20 10:58:29.632489] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:40.480 [2024-11-20 10:58:29.632499] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:40.480 [2024-11-20 10:58:29.632509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.480 [2024-11-20 10:58:29.632519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:40.480 [2024-11-20 10:58:29.632529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:21:40.480 [2024-11-20 10:58:29.632539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.480 [2024-11-20 10:58:29.632629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.480 [2024-11-20 10:58:29.632641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:40.480 [2024-11-20 10:58:29.632655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:40.480 [2024-11-20 10:58:29.632665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.480 [2024-11-20 10:58:29.632755] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:40.480 [2024-11-20 10:58:29.632768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:40.480 [2024-11-20 10:58:29.632779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:40.480 [2024-11-20 10:58:29.632789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.480 [2024-11-20 10:58:29.632800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:40.480 [2024-11-20 10:58:29.632809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:40.480 [2024-11-20 10:58:29.632818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:40.480 [2024-11-20 10:58:29.632828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:40.480 [2024-11-20 10:58:29.632837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:40.480 [2024-11-20 10:58:29.632847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:40.480 [2024-11-20 10:58:29.632856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:40.480 [2024-11-20 10:58:29.632866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:40.480 [2024-11-20 10:58:29.632874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:40.480 [2024-11-20 10:58:29.632916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:40.480 [2024-11-20 10:58:29.632925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:40.480 [2024-11-20 10:58:29.632935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.480 [2024-11-20 10:58:29.632943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:40.480 [2024-11-20 10:58:29.632952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:40.480 [2024-11-20 10:58:29.632961] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.480 [2024-11-20 10:58:29.632970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:40.480 [2024-11-20 10:58:29.632980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:40.480 [2024-11-20 10:58:29.632989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:40.480 [2024-11-20 10:58:29.632998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:40.480 [2024-11-20 10:58:29.633007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:40.480 [2024-11-20 10:58:29.633015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:40.480 [2024-11-20 10:58:29.633025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:40.480 [2024-11-20 10:58:29.633033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:40.480 [2024-11-20 10:58:29.633042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:40.480 [2024-11-20 10:58:29.633051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:40.480 [2024-11-20 10:58:29.633060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:40.480 [2024-11-20 10:58:29.633068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:40.480 [2024-11-20 10:58:29.633077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:40.480 [2024-11-20 10:58:29.633086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:40.480 [2024-11-20 10:58:29.633095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:40.480 [2024-11-20 10:58:29.633104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:40.480 [2024-11-20 10:58:29.633113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:40.480 [2024-11-20 10:58:29.633121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:40.480 [2024-11-20 10:58:29.633131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:40.480 [2024-11-20 10:58:29.633139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:40.480 [2024-11-20 10:58:29.633148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.480 [2024-11-20 10:58:29.633157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:40.480 [2024-11-20 10:58:29.633166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:40.480 [2024-11-20 10:58:29.633176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.480 [2024-11-20 10:58:29.633185] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:40.480 [2024-11-20 10:58:29.633194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:40.480 [2024-11-20 10:58:29.633204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:40.480 [2024-11-20 10:58:29.633217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:40.480 [2024-11-20 10:58:29.633227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:40.480 [2024-11-20 10:58:29.633237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:40.480 [2024-11-20 10:58:29.633245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:40.480 
[2024-11-20 10:58:29.633255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:40.480 [2024-11-20 10:58:29.633264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:40.480 [2024-11-20 10:58:29.633273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:40.480 [2024-11-20 10:58:29.633283] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:40.480 [2024-11-20 10:58:29.633295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:40.481 [2024-11-20 10:58:29.633306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:40.481 [2024-11-20 10:58:29.633316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:40.481 [2024-11-20 10:58:29.633326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:40.481 [2024-11-20 10:58:29.633336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:40.481 [2024-11-20 10:58:29.633346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:40.481 [2024-11-20 10:58:29.633356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:40.481 [2024-11-20 10:58:29.633367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:40.481 [2024-11-20 10:58:29.633377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:40.481 [2024-11-20 10:58:29.633387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:40.481 [2024-11-20 10:58:29.633396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:40.481 [2024-11-20 10:58:29.633407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:40.481 [2024-11-20 10:58:29.633417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:40.481 [2024-11-20 10:58:29.633428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:40.481 [2024-11-20 10:58:29.633438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:40.481 [2024-11-20 10:58:29.633449] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:40.481 [2024-11-20 10:58:29.633460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:40.481 [2024-11-20 10:58:29.633470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:40.481 [2024-11-20 10:58:29.633481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:40.481 [2024-11-20 10:58:29.633490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:40.481 [2024-11-20 10:58:29.633500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:40.481 [2024-11-20 10:58:29.633511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.481 [2024-11-20 10:58:29.633521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:40.481 [2024-11-20 10:58:29.633534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:21:40.481 [2024-11-20 10:58:29.633544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.481 [2024-11-20 10:58:29.671936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.481 [2024-11-20 10:58:29.671970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:40.481 [2024-11-20 10:58:29.671983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.390 ms 00:21:40.481 [2024-11-20 10:58:29.671992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.481 [2024-11-20 10:58:29.672101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.481 [2024-11-20 10:58:29.672120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:40.481 [2024-11-20 10:58:29.672131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:40.481 [2024-11-20 10:58:29.672140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.740 [2024-11-20 10:58:29.747257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.740 [2024-11-20 10:58:29.747294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:40.740 [2024-11-20 10:58:29.747307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.217 ms 00:21:40.740 [2024-11-20 10:58:29.747321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.740 [2024-11-20 10:58:29.747429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.740 [2024-11-20 10:58:29.747443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:40.740 [2024-11-20 10:58:29.747454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:40.740 [2024-11-20 10:58:29.747464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.740 [2024-11-20 10:58:29.747937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.740 [2024-11-20 10:58:29.747951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:40.740 [2024-11-20 10:58:29.747962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:21:40.740 [2024-11-20 10:58:29.747977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.740 [2024-11-20 10:58:29.748090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.740 [2024-11-20 10:58:29.748119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:40.740 [2024-11-20 10:58:29.748130] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:21:40.740 [2024-11-20 10:58:29.748139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.740 [2024-11-20 10:58:29.766749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.740 [2024-11-20 10:58:29.766784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:40.740 [2024-11-20 10:58:29.766797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.618 ms 00:21:40.740 [2024-11-20 10:58:29.766823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.740 [2024-11-20 10:58:29.784541] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:40.740 [2024-11-20 10:58:29.784579] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:40.740 [2024-11-20 10:58:29.784604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.740 [2024-11-20 10:58:29.784615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:40.740 [2024-11-20 10:58:29.784627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.699 ms 00:21:40.740 [2024-11-20 10:58:29.784652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.740 [2024-11-20 10:58:29.812536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.740 [2024-11-20 10:58:29.812692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:40.740 [2024-11-20 10:58:29.812741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.833 ms 00:21:40.740 [2024-11-20 10:58:29.812753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.740 [2024-11-20 10:58:29.829956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.740 [2024-11-20 10:58:29.829991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:40.740 [2024-11-20 10:58:29.830003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.111 ms 00:21:40.740 [2024-11-20 10:58:29.830012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.740 [2024-11-20 10:58:29.847581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.740 [2024-11-20 10:58:29.847627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:40.740 [2024-11-20 10:58:29.847640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.507 ms 00:21:40.740 [2024-11-20 10:58:29.847649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.740 [2024-11-20 10:58:29.848330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.740 [2024-11-20 10:58:29.848352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:40.740 [2024-11-20 10:58:29.848363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:21:40.740 [2024-11-20 10:58:29.848372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.740 [2024-11-20 10:58:29.929969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.740 [2024-11-20 10:58:29.930233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:40.740 [2024-11-20 10:58:29.930260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 81.704 ms 00:21:40.740 [2024-11-20 10:58:29.930271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.740 [2024-11-20 10:58:29.940862] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:40.740 [2024-11-20 10:58:29.955951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.740 [2024-11-20 10:58:29.955993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:40.740 [2024-11-20 10:58:29.956008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.632 ms 00:21:40.740 [2024-11-20 10:58:29.956018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.740 [2024-11-20 10:58:29.956129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.740 [2024-11-20 10:58:29.956145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:40.740 [2024-11-20 10:58:29.956156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:40.740 [2024-11-20 10:58:29.956166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.740 [2024-11-20 10:58:29.956215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.740 [2024-11-20 10:58:29.956226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:40.740 [2024-11-20 10:58:29.956236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:40.740 [2024-11-20 10:58:29.956246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.740 [2024-11-20 10:58:29.956270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.740 [2024-11-20 10:58:29.956281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:40.740 [2024-11-20 10:58:29.956293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:40.740 [2024-11-20 10:58:29.956302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.740 [2024-11-20 10:58:29.956337] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:40.740 [2024-11-20 10:58:29.956348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.740 [2024-11-20 10:58:29.956358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:40.740 [2024-11-20 10:58:29.956367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:40.740 [2024-11-20 10:58:29.956376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.999 [2024-11-20 10:58:29.991279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.999 [2024-11-20 10:58:29.991325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:40.999 [2024-11-20 10:58:29.991339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.939 ms 00:21:40.999 [2024-11-20 10:58:29.991350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:40.999 [2024-11-20 10:58:29.991463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:40.999 [2024-11-20 10:58:29.991476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:40.999 [2024-11-20 10:58:29.991488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:40.999 [2024-11-20 10:58:29.991497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
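(Each trace_step record above logs its own "duration: ... ms". A hypothetical one-liner for totalling those per-step timings from a captured copy of this log; the file name autorun.log is an assumption for illustration, not a file produced by this run:)

  # Sum the per-step FTL management durations recorded in a saved log file
  grep -o 'duration: [0-9.]* ms' autorun.log \
    | awk '{ total += $2 } END { printf "total: %.3f ms across %d steps\n", total, NR }'

(The per-step sum may not exactly equal the process-level summary that follows, since the management process also spends time between steps.)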
00:21:40.999 [2024-11-20 10:58:29.992352] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:40.999 [2024-11-20 10:58:29.996491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 397.692 ms, result 0 00:21:40.999 [2024-11-20 10:58:29.997428] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:40.999 [2024-11-20 10:58:30.016022] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:41.936  [2024-11-20T10:58:32.125Z] Copying: 23/256 [MB] (23 MBps) [2024-11-20T10:58:33.059Z] Copying: 47/256 [MB] (24 MBps) [2024-11-20T10:58:34.434Z] Copying: 72/256 [MB] (24 MBps) [2024-11-20T10:58:35.368Z] Copying: 98/256 [MB] (25 MBps) [2024-11-20T10:58:36.302Z] Copying: 122/256 [MB] (24 MBps) [2024-11-20T10:58:37.236Z] Copying: 147/256 [MB] (24 MBps) [2024-11-20T10:58:38.173Z] Copying: 171/256 [MB] (24 MBps) [2024-11-20T10:58:39.109Z] Copying: 196/256 [MB] (24 MBps) [2024-11-20T10:58:40.065Z] Copying: 220/256 [MB] (24 MBps) [2024-11-20T10:58:40.633Z] Copying: 244/256 [MB] (24 MBps) [2024-11-20T10:58:40.633Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-20 10:58:40.490633] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:51.380 [2024-11-20 10:58:40.504680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.380 [2024-11-20 10:58:40.504727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:51.380 [2024-11-20 10:58:40.504742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:51.380 [2024-11-20 10:58:40.504752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.380 [2024-11-20 10:58:40.504772] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:51.380 [2024-11-20 10:58:40.508813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.380 [2024-11-20 10:58:40.508847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:51.380 [2024-11-20 10:58:40.508859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.032 ms 00:21:51.380 [2024-11-20 10:58:40.508868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.380 [2024-11-20 10:58:40.510690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.380 [2024-11-20 10:58:40.510725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:51.380 [2024-11-20 10:58:40.510738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.803 ms 00:21:51.380 [2024-11-20 10:58:40.510748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.380 [2024-11-20 10:58:40.517294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.380 [2024-11-20 10:58:40.517437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:51.380 [2024-11-20 10:58:40.517465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.539 ms 00:21:51.380 [2024-11-20 10:58:40.517476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.380 [2024-11-20 10:58:40.523156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.380 [2024-11-20 10:58:40.523191] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:51.380 [2024-11-20 10:58:40.523202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.603 ms 00:21:51.380 [2024-11-20 10:58:40.523213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.380 [2024-11-20 10:58:40.559741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.380 [2024-11-20 10:58:40.559778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:51.380 [2024-11-20 10:58:40.559792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.541 ms 00:21:51.380 [2024-11-20 10:58:40.559818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.380 [2024-11-20 10:58:40.580282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.380 [2024-11-20 10:58:40.580320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:51.380 [2024-11-20 10:58:40.580339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.442 ms 00:21:51.380 [2024-11-20 10:58:40.580353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.380 [2024-11-20 10:58:40.580476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.380 [2024-11-20 10:58:40.580489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:51.380 [2024-11-20 10:58:40.580500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:51.380 [2024-11-20 10:58:40.580510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.380 [2024-11-20 10:58:40.615920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.380 [2024-11-20 10:58:40.615953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:51.380 [2024-11-20 10:58:40.615965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.451 ms 00:21:51.380 [2024-11-20 10:58:40.615973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.640 [2024-11-20 10:58:40.651073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.640 [2024-11-20 10:58:40.651108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:51.640 [2024-11-20 10:58:40.651120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.088 ms 00:21:51.640 [2024-11-20 10:58:40.651130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.640 [2024-11-20 10:58:40.685866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.640 [2024-11-20 10:58:40.685902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:51.640 [2024-11-20 10:58:40.685914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.717 ms 00:21:51.640 [2024-11-20 10:58:40.685938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.640 [2024-11-20 10:58:40.719317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.640 [2024-11-20 10:58:40.719443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:51.640 [2024-11-20 10:58:40.719479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.355 ms 00:21:51.640 [2024-11-20 10:58:40.719489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.640 [2024-11-20 10:58:40.719569] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:21:51.640 [2024-11-20 10:58:40.719591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:51.640 [2024-11-20 10:58:40.719618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:51.640 [2024-11-20 10:58:40.719645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:51.640 [2024-11-20 10:58:40.719655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:51.640 [2024-11-20 10:58:40.719666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:51.640 [2024-11-20 10:58:40.719676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:51.640 [2024-11-20 10:58:40.719686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:51.640 [2024-11-20 10:58:40.719697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:51.640 [2024-11-20 10:58:40.719707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:51.640 [2024-11-20 10:58:40.719718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:51.640 [2024-11-20 10:58:40.719728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:51.640 [2024-11-20 10:58:40.719738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:51.640 [2024-11-20 10:58:40.719749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:51.640 [2024-11-20 10:58:40.719759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:51.640 [2024-11-20 10:58:40.719769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:51.640 [2024-11-20 10:58:40.719780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:51.640 [2024-11-20 10:58:40.719790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.719997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720387] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:51.641 [2024-11-20 10:58:40.720530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:51.642 [2024-11-20 10:58:40.720540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:51.642 [2024-11-20 10:58:40.720550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:51.642 [2024-11-20 10:58:40.720561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:51.642 [2024-11-20 10:58:40.720571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:51.642 [2024-11-20 10:58:40.720582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:51.642 [2024-11-20 10:58:40.720593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:51.642 [2024-11-20 10:58:40.720603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:51.642 [2024-11-20 10:58:40.720632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:51.642 [2024-11-20 10:58:40.720653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:51.642 [2024-11-20 10:58:40.720663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:51.642 [2024-11-20 10:58:40.720673] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:51.642 [2024-11-20 10:58:40.720691] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:51.642 [2024-11-20 10:58:40.720701] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4b8e0acb-4646-4966-b129-dade0fcf8fcb 00:21:51.642 [2024-11-20 10:58:40.720711] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:51.642 [2024-11-20 10:58:40.720721] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:51.642 [2024-11-20 10:58:40.720730] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:51.642 [2024-11-20 10:58:40.720740] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:51.642 [2024-11-20 10:58:40.720749] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:51.642 [2024-11-20 10:58:40.720759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:51.642 [2024-11-20 10:58:40.720768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:51.642 [2024-11-20 10:58:40.720777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:51.642 [2024-11-20 10:58:40.720786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:51.642 [2024-11-20 10:58:40.720795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.642 [2024-11-20 10:58:40.720805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:51.642 [2024-11-20 10:58:40.720819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:21:51.642 [2024-11-20 10:58:40.720828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.642 [2024-11-20 10:58:40.739674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.642 [2024-11-20 10:58:40.739706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:51.642 [2024-11-20 10:58:40.739718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.857 ms 00:21:51.642 [2024-11-20 10:58:40.739727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.642 [2024-11-20 10:58:40.740312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.642 [2024-11-20 10:58:40.740333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:51.642 [2024-11-20 10:58:40.740344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:21:51.642 [2024-11-20 10:58:40.740353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.642 [2024-11-20 10:58:40.792485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.642 [2024-11-20 10:58:40.792518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:51.642 [2024-11-20 10:58:40.792531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.642 [2024-11-20 10:58:40.792541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.642 [2024-11-20 10:58:40.792659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.642 [2024-11-20 10:58:40.792675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:51.642 [2024-11-20 10:58:40.792686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.642 [2024-11-20 10:58:40.792696] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.642 [2024-11-20 10:58:40.792759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.642 [2024-11-20 10:58:40.792772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:51.642 [2024-11-20 10:58:40.792782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.642 [2024-11-20 10:58:40.792792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.642 [2024-11-20 10:58:40.792810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.642 [2024-11-20 10:58:40.792820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:51.642 [2024-11-20 10:58:40.792833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.642 [2024-11-20 10:58:40.792843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.901 [2024-11-20 10:58:40.909360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.901 [2024-11-20 10:58:40.909576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:51.901 [2024-11-20 10:58:40.909614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.901 [2024-11-20 10:58:40.909626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.901 [2024-11-20 10:58:41.005645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.901 [2024-11-20 10:58:41.005686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:51.901 [2024-11-20 10:58:41.005705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.901 [2024-11-20 10:58:41.005715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.901 [2024-11-20 10:58:41.005775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.901 [2024-11-20 10:58:41.005786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:51.901 [2024-11-20 10:58:41.005796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.901 [2024-11-20 10:58:41.005805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.901 [2024-11-20 10:58:41.005831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.901 [2024-11-20 10:58:41.005841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:51.901 [2024-11-20 10:58:41.005851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.901 [2024-11-20 10:58:41.005863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.901 [2024-11-20 10:58:41.005975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.901 [2024-11-20 10:58:41.005988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:51.901 [2024-11-20 10:58:41.005997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.901 [2024-11-20 10:58:41.006006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.901 [2024-11-20 10:58:41.006039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.901 [2024-11-20 10:58:41.006050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:51.901 [2024-11-20 10:58:41.006059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:21:51.901 [2024-11-20 10:58:41.006068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.901 [2024-11-20 10:58:41.006108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.901 [2024-11-20 10:58:41.006119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:51.901 [2024-11-20 10:58:41.006129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.901 [2024-11-20 10:58:41.006138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.901 [2024-11-20 10:58:41.006179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:51.901 [2024-11-20 10:58:41.006190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:51.901 [2024-11-20 10:58:41.006199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:51.901 [2024-11-20 10:58:41.006211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.901 [2024-11-20 10:58:41.006334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 502.461 ms, result 0 00:21:53.277 00:21:53.277 00:21:53.277 10:58:42 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78229 00:21:53.277 10:58:42 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:21:53.277 10:58:42 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78229 00:21:53.277 10:58:42 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78229 ']' 00:21:53.277 10:58:42 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.277 10:58:42 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.277 10:58:42 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.277 10:58:42 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.277 10:58:42 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:53.277 [2024-11-20 10:58:42.253131] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:21:53.278 [2024-11-20 10:58:42.253479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78229 ] 00:21:53.278 [2024-11-20 10:58:42.431677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.278 [2024-11-20 10:58:42.527713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.272 10:58:43 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.272 10:58:43 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:54.272 10:58:43 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:21:54.531 [2024-11-20 10:58:43.560720] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:54.531 [2024-11-20 10:58:43.560779] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:54.531 [2024-11-20 10:58:43.739930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.531 [2024-11-20 10:58:43.739976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:54.531 [2024-11-20 10:58:43.739997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:54.531 [2024-11-20 10:58:43.740006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.531 [2024-11-20 10:58:43.743718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.531 [2024-11-20 10:58:43.743876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:54.531 [2024-11-20 10:58:43.743902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.697 ms 00:21:54.531 [2024-11-20 10:58:43.743912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.531 [2024-11-20 10:58:43.744063] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:54.531 [2024-11-20 10:58:43.745032] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:54.531 [2024-11-20 10:58:43.745059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.531 [2024-11-20 10:58:43.745081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:54.531 [2024-11-20 10:58:43.745093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:21:54.531 [2024-11-20 10:58:43.745103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.531 [2024-11-20 10:58:43.746735] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:54.531 [2024-11-20 10:58:43.765092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.531 [2024-11-20 10:58:43.765138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:54.531 [2024-11-20 10:58:43.765152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.391 ms 00:21:54.531 [2024-11-20 10:58:43.765166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.531 [2024-11-20 10:58:43.765259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.531 [2024-11-20 10:58:43.765277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:54.531 [2024-11-20 10:58:43.765288] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:54.531 [2024-11-20 10:58:43.765301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.531 [2024-11-20 10:58:43.771915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.531 [2024-11-20 10:58:43.771955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:54.531 [2024-11-20 10:58:43.771967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.572 ms 00:21:54.531 [2024-11-20 10:58:43.771982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.531 [2024-11-20 10:58:43.772107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.531 [2024-11-20 10:58:43.772126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:54.531 [2024-11-20 10:58:43.772136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:21:54.531 [2024-11-20 10:58:43.772151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.531 [2024-11-20 10:58:43.772189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.531 [2024-11-20 10:58:43.772204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:54.531 [2024-11-20 10:58:43.772214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:54.531 [2024-11-20 10:58:43.772227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.531 [2024-11-20 10:58:43.772250] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:54.531 [2024-11-20 10:58:43.776860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.531 [2024-11-20 10:58:43.776892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:54.531 [2024-11-20 10:58:43.776907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.618 ms 00:21:54.531 [2024-11-20 10:58:43.776934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.531 [2024-11-20 10:58:43.777008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.531 [2024-11-20 10:58:43.777020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:54.531 [2024-11-20 10:58:43.777036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:54.531 [2024-11-20 10:58:43.777051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.531 [2024-11-20 10:58:43.777078] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:54.531 [2024-11-20 10:58:43.777101] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:54.531 [2024-11-20 10:58:43.777151] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:54.531 [2024-11-20 10:58:43.777170] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:54.531 [2024-11-20 10:58:43.777260] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:54.531 [2024-11-20 10:58:43.777273] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:54.531 [2024-11-20 10:58:43.777293] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:54.531 [2024-11-20 10:58:43.777310] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:54.531 [2024-11-20 10:58:43.777327] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:54.531 [2024-11-20 10:58:43.777338] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:54.531 [2024-11-20 10:58:43.777352] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:54.531 [2024-11-20 10:58:43.777362] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:54.531 [2024-11-20 10:58:43.777381] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:54.531 [2024-11-20 10:58:43.777391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.531 [2024-11-20 10:58:43.777406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:54.531 [2024-11-20 10:58:43.777417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:21:54.531 [2024-11-20 10:58:43.777431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.531 [2024-11-20 10:58:43.777509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.531 [2024-11-20 10:58:43.777525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:54.531 [2024-11-20 10:58:43.777535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:54.531 [2024-11-20 10:58:43.777556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.531 [2024-11-20 10:58:43.777662] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:54.531 [2024-11-20 10:58:43.777681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:54.531 [2024-11-20 10:58:43.777692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:54.531 [2024-11-20 10:58:43.777707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.531 [2024-11-20 10:58:43.777718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:54.531 [2024-11-20 10:58:43.777731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:54.531 [2024-11-20 10:58:43.777741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:54.531 [2024-11-20 10:58:43.777761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:54.531 [2024-11-20 10:58:43.777770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:54.531 [2024-11-20 10:58:43.777784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:54.531 [2024-11-20 10:58:43.777793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:54.531 [2024-11-20 10:58:43.777808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:54.531 [2024-11-20 10:58:43.777818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:54.531 [2024-11-20 10:58:43.777831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:54.531 [2024-11-20 10:58:43.777841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:54.531 [2024-11-20 10:58:43.777854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.531 
[2024-11-20 10:58:43.777863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:54.531 [2024-11-20 10:58:43.777877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:54.531 [2024-11-20 10:58:43.777886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.531 [2024-11-20 10:58:43.777900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:54.531 [2024-11-20 10:58:43.777920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:54.531 [2024-11-20 10:58:43.777934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:54.532 [2024-11-20 10:58:43.777943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:54.532 [2024-11-20 10:58:43.777961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:54.532 [2024-11-20 10:58:43.777970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:54.532 [2024-11-20 10:58:43.777984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:54.532 [2024-11-20 10:58:43.777993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:54.532 [2024-11-20 10:58:43.778006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:54.532 [2024-11-20 10:58:43.778016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:54.532 [2024-11-20 10:58:43.778045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:54.532 [2024-11-20 10:58:43.778054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:54.532 [2024-11-20 10:58:43.778068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:54.532 [2024-11-20 10:58:43.778078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:54.532 [2024-11-20 10:58:43.778092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:54.532 [2024-11-20 10:58:43.778102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:54.532 [2024-11-20 10:58:43.778115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:54.532 [2024-11-20 10:58:43.778125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:54.532 [2024-11-20 10:58:43.778138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:54.532 [2024-11-20 10:58:43.778148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:54.532 [2024-11-20 10:58:43.778165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.532 [2024-11-20 10:58:43.778175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:54.532 [2024-11-20 10:58:43.778189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:54.532 [2024-11-20 10:58:43.778198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.532 [2024-11-20 10:58:43.778212] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:54.532 [2024-11-20 10:58:43.778223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:54.532 [2024-11-20 10:58:43.778242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:54.532 [2024-11-20 10:58:43.778252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:54.532 [2024-11-20 10:58:43.778266] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:21:54.532 [2024-11-20 10:58:43.778276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:54.532 [2024-11-20 10:58:43.778290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:54.532 [2024-11-20 10:58:43.778300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:54.532 [2024-11-20 10:58:43.778313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:54.532 [2024-11-20 10:58:43.778322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:54.532 [2024-11-20 10:58:43.778337] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:54.532 [2024-11-20 10:58:43.778350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:54.532 [2024-11-20 10:58:43.778368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:54.532 [2024-11-20 10:58:43.778378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:54.532 [2024-11-20 10:58:43.778392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:54.532 [2024-11-20 10:58:43.778402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:54.532 [2024-11-20 10:58:43.778415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:54.532 [2024-11-20 10:58:43.778425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:54.532 [2024-11-20 10:58:43.778437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:54.532 [2024-11-20 10:58:43.778447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:54.532 [2024-11-20 10:58:43.778459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:54.532 [2024-11-20 10:58:43.778469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:54.532 [2024-11-20 10:58:43.778481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:54.532 [2024-11-20 10:58:43.778491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:54.532 [2024-11-20 10:58:43.778510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:54.532 [2024-11-20 10:58:43.778521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:54.532 [2024-11-20 10:58:43.778533] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:54.532 [2024-11-20 
10:58:43.778545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:54.532 [2024-11-20 10:58:43.778561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:54.532 [2024-11-20 10:58:43.778571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:54.532 [2024-11-20 10:58:43.778584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:54.532 [2024-11-20 10:58:43.778604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:54.532 [2024-11-20 10:58:43.778619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.532 [2024-11-20 10:58:43.778630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:54.532 [2024-11-20 10:58:43.778642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:21:54.532 [2024-11-20 10:58:43.778651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.790 [2024-11-20 10:58:43.817039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.791 [2024-11-20 10:58:43.817074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:54.791 [2024-11-20 10:58:43.817091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.383 ms 00:21:54.791 [2024-11-20 10:58:43.817102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.791 [2024-11-20 10:58:43.817217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.791 [2024-11-20 10:58:43.817229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:54.791 [2024-11-20 10:58:43.817244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:54.791 [2024-11-20 10:58:43.817254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.791 [2024-11-20 10:58:43.863094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.791 [2024-11-20 10:58:43.863131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:54.791 [2024-11-20 10:58:43.863156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.884 ms 00:21:54.791 [2024-11-20 10:58:43.863166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.791 [2024-11-20 10:58:43.863257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.791 [2024-11-20 10:58:43.863270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:54.791 [2024-11-20 10:58:43.863285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:54.791 [2024-11-20 10:58:43.863296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.791 [2024-11-20 10:58:43.863752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.791 [2024-11-20 10:58:43.863766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:54.791 [2024-11-20 10:58:43.863787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:21:54.791 [2024-11-20 10:58:43.863798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:54.791 [2024-11-20 10:58:43.863916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.791 [2024-11-20 10:58:43.863929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:54.791 [2024-11-20 10:58:43.863944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:21:54.791 [2024-11-20 10:58:43.863953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.791 [2024-11-20 10:58:43.885317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.791 [2024-11-20 10:58:43.885350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:54.791 [2024-11-20 10:58:43.885369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.368 ms 00:21:54.791 [2024-11-20 10:58:43.885379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.791 [2024-11-20 10:58:43.904163] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:54.791 [2024-11-20 10:58:43.904347] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:54.791 [2024-11-20 10:58:43.904466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.791 [2024-11-20 10:58:43.904481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:54.791 [2024-11-20 10:58:43.904495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.009 ms 00:21:54.791 [2024-11-20 10:58:43.904506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.791 [2024-11-20 10:58:43.932283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.791 [2024-11-20 10:58:43.932319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:54.791 [2024-11-20 10:58:43.932335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.689 ms 00:21:54.791 [2024-11-20 10:58:43.932345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.791 [2024-11-20 10:58:43.949539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.791 [2024-11-20 10:58:43.949727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:54.791 [2024-11-20 10:58:43.949759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.137 ms 00:21:54.791 [2024-11-20 10:58:43.949769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.791 [2024-11-20 10:58:43.966638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.791 [2024-11-20 10:58:43.966672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:54.791 [2024-11-20 10:58:43.966690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.816 ms 00:21:54.791 [2024-11-20 10:58:43.966699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:54.791 [2024-11-20 10:58:43.967414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:54.791 [2024-11-20 10:58:43.967436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:54.791 [2024-11-20 10:58:43.967452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:21:54.791 [2024-11-20 10:58:43.967478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.050 [2024-11-20 
10:58:44.079318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.050 [2024-11-20 10:58:44.079372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:55.050 [2024-11-20 10:58:44.079411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.986 ms 00:21:55.050 [2024-11-20 10:58:44.079423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.050 [2024-11-20 10:58:44.090172] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:55.050 [2024-11-20 10:58:44.106092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.050 [2024-11-20 10:58:44.106154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:55.050 [2024-11-20 10:58:44.106192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.583 ms 00:21:55.050 [2024-11-20 10:58:44.106207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.050 [2024-11-20 10:58:44.106311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.050 [2024-11-20 10:58:44.106331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:55.050 [2024-11-20 10:58:44.106343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:55.050 [2024-11-20 10:58:44.106358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.050 [2024-11-20 10:58:44.106412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.050 [2024-11-20 10:58:44.106428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:55.050 [2024-11-20 10:58:44.106439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:55.050 [2024-11-20 10:58:44.106453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.050 [2024-11-20 10:58:44.106483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.050 [2024-11-20 10:58:44.106499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:55.050 [2024-11-20 10:58:44.106520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:55.050 [2024-11-20 10:58:44.106535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.050 [2024-11-20 10:58:44.106577] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:55.050 [2024-11-20 10:58:44.106616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.050 [2024-11-20 10:58:44.106627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:55.050 [2024-11-20 10:58:44.106645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:55.050 [2024-11-20 10:58:44.106655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.050 [2024-11-20 10:58:44.142867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.050 [2024-11-20 10:58:44.142903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:55.050 [2024-11-20 10:58:44.142920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.238 ms 00:21:55.050 [2024-11-20 10:58:44.142930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.050 [2024-11-20 10:58:44.143042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.050 [2024-11-20 10:58:44.143056] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:55.050 [2024-11-20 10:58:44.143069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:55.050 [2024-11-20 10:58:44.143082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.050 [2024-11-20 10:58:44.144042] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:55.050 [2024-11-20 10:58:44.148180] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 404.411 ms, result 0 00:21:55.050 [2024-11-20 10:58:44.149477] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:55.050 Some configs were skipped because the RPC state that can call them passed over. 00:21:55.050 10:58:44 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:21:55.309 [2024-11-20 10:58:44.389064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.309 [2024-11-20 10:58:44.389265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:55.309 [2024-11-20 10:58:44.389370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:21:55.309 [2024-11-20 10:58:44.389421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.309 [2024-11-20 10:58:44.389498] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.180 ms, result 0 00:21:55.309 true 00:21:55.309 10:58:44 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:21:55.568 [2024-11-20 10:58:44.592471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.568 [2024-11-20 10:58:44.592518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:21:55.568 [2024-11-20 10:58:44.592540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.299 ms 00:21:55.568 [2024-11-20 10:58:44.592551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.568 [2024-11-20 10:58:44.592614] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.431 ms, result 0 00:21:55.568 true 00:21:55.568 10:58:44 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78229 00:21:55.568 10:58:44 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78229 ']' 00:21:55.568 10:58:44 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78229 00:21:55.568 10:58:44 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:55.568 10:58:44 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.568 10:58:44 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78229 00:21:55.568 killing process with pid 78229 00:21:55.568 10:58:44 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:55.568 10:58:44 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:55.568 10:58:44 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78229' 00:21:55.568 10:58:44 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78229 00:21:55.568 10:58:44 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78229 00:21:56.504 [2024-11-20 10:58:45.706054] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.504 [2024-11-20 10:58:45.706118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:56.504 [2024-11-20 10:58:45.706134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:56.504 [2024-11-20 10:58:45.706145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.504 [2024-11-20 10:58:45.706166] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:56.504 [2024-11-20 10:58:45.710258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.504 [2024-11-20 10:58:45.710291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:56.504 [2024-11-20 10:58:45.710308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.079 ms 00:21:56.504 [2024-11-20 10:58:45.710317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.504 [2024-11-20 10:58:45.710588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.504 [2024-11-20 10:58:45.710602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:56.504 [2024-11-20 10:58:45.710624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:21:56.504 [2024-11-20 10:58:45.710634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.504 [2024-11-20 10:58:45.713972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.504 [2024-11-20 10:58:45.714010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:56.504 [2024-11-20 10:58:45.714027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.321 ms 00:21:56.504 [2024-11-20 10:58:45.714038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.504 [2024-11-20 10:58:45.719423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.504 [2024-11-20 10:58:45.719457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:56.504 [2024-11-20 10:58:45.719470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.352 ms 00:21:56.504 [2024-11-20 10:58:45.719480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.504 [2024-11-20 10:58:45.733649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.504 [2024-11-20 10:58:45.733682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:56.504 [2024-11-20 10:58:45.733700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.134 ms 00:21:56.504 [2024-11-20 10:58:45.733735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.504 [2024-11-20 10:58:45.744130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.504 [2024-11-20 10:58:45.744276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:56.504 [2024-11-20 10:58:45.744321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.340 ms 00:21:56.504 [2024-11-20 10:58:45.744331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.504 [2024-11-20 10:58:45.744475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.504 [2024-11-20 10:58:45.744488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:56.504 [2024-11-20 10:58:45.744501] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:56.504 [2024-11-20 10:58:45.744510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.763 [2024-11-20 10:58:45.759406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.763 [2024-11-20 10:58:45.759554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:56.763 [2024-11-20 10:58:45.759578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.895 ms 00:21:56.763 [2024-11-20 10:58:45.759588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.763 [2024-11-20 10:58:45.773947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.763 [2024-11-20 10:58:45.774072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:56.763 [2024-11-20 10:58:45.774114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.294 ms 00:21:56.763 [2024-11-20 10:58:45.774123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.764 [2024-11-20 10:58:45.788227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.764 [2024-11-20 10:58:45.788351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:56.764 [2024-11-20 10:58:45.788392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.050 ms 00:21:56.764 [2024-11-20 10:58:45.788401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.764 [2024-11-20 10:58:45.802404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.764 [2024-11-20 10:58:45.802437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:56.764 [2024-11-20 10:58:45.802451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.927 ms 00:21:56.764 [2024-11-20 10:58:45.802460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.764 [2024-11-20 10:58:45.802545] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:56.764 [2024-11-20 10:58:45.802561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 
10:58:45.802707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.802984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:21:56.764 [2024-11-20 10:58:45.802996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:56.764 [2024-11-20 10:58:45.803602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:56.765 [2024-11-20 10:58:45.803873] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:56.765 [2024-11-20 10:58:45.803900] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4b8e0acb-4646-4966-b129-dade0fcf8fcb 00:21:56.765 [2024-11-20 10:58:45.803923] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:56.765 [2024-11-20 10:58:45.803943] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:56.765 [2024-11-20 10:58:45.803953] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:56.765 [2024-11-20 10:58:45.803969] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:56.765 [2024-11-20 10:58:45.803978] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:56.765 [2024-11-20 10:58:45.803993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:56.765 [2024-11-20 10:58:45.804003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:56.765 [2024-11-20 10:58:45.804016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:56.765 [2024-11-20 10:58:45.804026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:56.765 [2024-11-20 10:58:45.804040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
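
Two quick checks on the shutdown dump above, assuming the FTL's 4 KiB block size (not printed here, but consistent with the layout figures later in the log): a band of 261120 blocks is 1020 MiB, so the 100 bands roughly fill the 102400 MiB data region, and with 960 total writes against 0 user writes the write-amplification ratio is infinite, hence "WAF: inf".

#include <stdio.h>

int main(void)
{
        const double mib = 1024.0 * 1024.0;
        const double blk = 4096.0;  /* assumed 4 KiB FTL block size */

        /* 261120 blocks/band -> 1020 MiB; 100 such bands sit inside the
         * 102400.00 MiB data_btm region shown in the layout dump. */
        printf("band: %.0f MiB\n", 261120.0 * blk / mib);

        /* WAF = media writes / user writes; 960 / 0 -> inf, matching
         * "WAF: inf" (only housekeeping writes, no user data yet). */
        double total = 960.0, user = 0.0;
        printf("WAF: %g\n", total / user);
        return 0;
}
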
00:21:56.765 [2024-11-20 10:58:45.804050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:56.765 [2024-11-20 10:58:45.804066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.513 ms 00:21:56.765 [2024-11-20 10:58:45.804076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.765 [2024-11-20 10:58:45.822972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.765 [2024-11-20 10:58:45.823005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:56.765 [2024-11-20 10:58:45.823028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.890 ms 00:21:56.765 [2024-11-20 10:58:45.823038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.765 [2024-11-20 10:58:45.823649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:56.765 [2024-11-20 10:58:45.823665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:56.765 [2024-11-20 10:58:45.823697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:21:56.765 [2024-11-20 10:58:45.823711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.765 [2024-11-20 10:58:45.890630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.765 [2024-11-20 10:58:45.890665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:56.765 [2024-11-20 10:58:45.890683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.765 [2024-11-20 10:58:45.890709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.765 [2024-11-20 10:58:45.890793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.765 [2024-11-20 10:58:45.890805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:56.765 [2024-11-20 10:58:45.890820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.765 [2024-11-20 10:58:45.890835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.765 [2024-11-20 10:58:45.890886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.765 [2024-11-20 10:58:45.890899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:56.765 [2024-11-20 10:58:45.890918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.765 [2024-11-20 10:58:45.890928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.765 [2024-11-20 10:58:45.890951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.765 [2024-11-20 10:58:45.890962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:56.765 [2024-11-20 10:58:45.890976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.765 [2024-11-20 10:58:45.891002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:56.765 [2024-11-20 10:58:46.006493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:56.765 [2024-11-20 10:58:46.006546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:56.765 [2024-11-20 10:58:46.006566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:56.765 [2024-11-20 10:58:46.006576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.024 [2024-11-20 
10:58:46.101502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.024 [2024-11-20 10:58:46.101547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:57.024 [2024-11-20 10:58:46.101566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.024 [2024-11-20 10:58:46.101582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.024 [2024-11-20 10:58:46.101689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.024 [2024-11-20 10:58:46.101703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:57.024 [2024-11-20 10:58:46.101739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.024 [2024-11-20 10:58:46.101749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.024 [2024-11-20 10:58:46.101784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.024 [2024-11-20 10:58:46.101796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:57.024 [2024-11-20 10:58:46.101808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.024 [2024-11-20 10:58:46.101818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.024 [2024-11-20 10:58:46.101939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.024 [2024-11-20 10:58:46.101953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:57.024 [2024-11-20 10:58:46.101966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.024 [2024-11-20 10:58:46.101975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.024 [2024-11-20 10:58:46.102015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.024 [2024-11-20 10:58:46.102028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:57.024 [2024-11-20 10:58:46.102040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.024 [2024-11-20 10:58:46.102050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.024 [2024-11-20 10:58:46.102091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.024 [2024-11-20 10:58:46.102105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:57.024 [2024-11-20 10:58:46.102121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.024 [2024-11-20 10:58:46.102131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.025 [2024-11-20 10:58:46.102177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:57.025 [2024-11-20 10:58:46.102188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:57.025 [2024-11-20 10:58:46.102201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:57.025 [2024-11-20 10:58:46.102211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:57.025 [2024-11-20 10:58:46.102346] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 396.909 ms, result 0 00:21:57.961 10:58:47 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:57.961 10:58:47 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:57.961 [2024-11-20 10:58:47.120616] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:21:57.961 [2024-11-20 10:58:47.120918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78287 ] 00:21:58.220 [2024-11-20 10:58:47.300915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.220 [2024-11-20 10:58:47.406464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.788 [2024-11-20 10:58:47.739870] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:58.788 [2024-11-20 10:58:47.739934] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:58.788 [2024-11-20 10:58:47.900535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.788 [2024-11-20 10:58:47.900582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:58.788 [2024-11-20 10:58:47.900609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:58.788 [2024-11-20 10:58:47.900620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.788 [2024-11-20 10:58:47.903652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.788 [2024-11-20 10:58:47.903690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:58.788 [2024-11-20 10:58:47.903703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.001 ms 00:21:58.788 [2024-11-20 10:58:47.903728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.788 [2024-11-20 10:58:47.903832] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:58.788 [2024-11-20 10:58:47.904787] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:58.788 [2024-11-20 10:58:47.904811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.788 [2024-11-20 10:58:47.904822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:58.788 [2024-11-20 10:58:47.904832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:21:58.788 [2024-11-20 10:58:47.904842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.788 [2024-11-20 10:58:47.906314] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:58.788 [2024-11-20 10:58:47.924961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.788 [2024-11-20 10:58:47.925001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:58.788 [2024-11-20 10:58:47.925015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.677 ms 00:21:58.788 [2024-11-20 10:58:47.925025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.788 [2024-11-20 10:58:47.925113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.788 [2024-11-20 10:58:47.925126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:58.789 [2024-11-20 10:58:47.925137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
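
The spdk_dd job above reads --count=65536 logical blocks from the ftl0 bdev into test/ftl/data; at the same assumed 4 KiB block size that is exactly the 256 [MB] total reported by the copy progress further down.

#include <stdio.h>

int main(void)
{
        /* --count=65536 blocks at an assumed 4 KiB FTL block size */
        unsigned long long bytes = 65536ULL * 4096ULL;
        printf("%llu bytes = %llu MiB\n", bytes, bytes >> 20); /* 256 MiB */
        return 0;
}
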
[FTL][ftl0] duration: 0.019 ms 00:21:58.789 [2024-11-20 10:58:47.925147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.789 [2024-11-20 10:58:47.931731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.789 [2024-11-20 10:58:47.931864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:58.789 [2024-11-20 10:58:47.931899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.556 ms 00:21:58.789 [2024-11-20 10:58:47.931908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.789 [2024-11-20 10:58:47.932009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.789 [2024-11-20 10:58:47.932022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:58.789 [2024-11-20 10:58:47.932033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:58.789 [2024-11-20 10:58:47.932042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.789 [2024-11-20 10:58:47.932069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.789 [2024-11-20 10:58:47.932083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:58.789 [2024-11-20 10:58:47.932093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:58.789 [2024-11-20 10:58:47.932102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.789 [2024-11-20 10:58:47.932123] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:58.789 [2024-11-20 10:58:47.936806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.789 [2024-11-20 10:58:47.936836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:58.789 [2024-11-20 10:58:47.936847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.695 ms 00:21:58.789 [2024-11-20 10:58:47.936856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.789 [2024-11-20 10:58:47.936934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.789 [2024-11-20 10:58:47.936947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:58.789 [2024-11-20 10:58:47.936957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:58.789 [2024-11-20 10:58:47.936966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.789 [2024-11-20 10:58:47.936985] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:58.789 [2024-11-20 10:58:47.937010] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:58.789 [2024-11-20 10:58:47.937044] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:58.789 [2024-11-20 10:58:47.937060] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:58.789 [2024-11-20 10:58:47.937146] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:58.789 [2024-11-20 10:58:47.937158] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:58.789 [2024-11-20 10:58:47.937171] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:58.789 [2024-11-20 10:58:47.937183] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:58.789 [2024-11-20 10:58:47.937198] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:58.789 [2024-11-20 10:58:47.937209] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:58.789 [2024-11-20 10:58:47.937218] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:58.789 [2024-11-20 10:58:47.937228] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:58.789 [2024-11-20 10:58:47.937237] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:58.789 [2024-11-20 10:58:47.937247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.789 [2024-11-20 10:58:47.937257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:58.789 [2024-11-20 10:58:47.937266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:21:58.789 [2024-11-20 10:58:47.937276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.789 [2024-11-20 10:58:47.937349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.789 [2024-11-20 10:58:47.937360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:58.789 [2024-11-20 10:58:47.937373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:58.789 [2024-11-20 10:58:47.937383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.789 [2024-11-20 10:58:47.937470] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:58.789 [2024-11-20 10:58:47.937483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:58.789 [2024-11-20 10:58:47.937493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:58.789 [2024-11-20 10:58:47.937503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.789 [2024-11-20 10:58:47.937513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:58.789 [2024-11-20 10:58:47.937522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:58.789 [2024-11-20 10:58:47.937531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:58.789 [2024-11-20 10:58:47.937541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:58.789 [2024-11-20 10:58:47.937551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:58.789 [2024-11-20 10:58:47.937559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:58.789 [2024-11-20 10:58:47.937569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:58.789 [2024-11-20 10:58:47.937578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:58.789 [2024-11-20 10:58:47.937587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:58.789 [2024-11-20 10:58:47.937606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:58.789 [2024-11-20 10:58:47.937630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:58.789 [2024-11-20 10:58:47.937640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.789 [2024-11-20 10:58:47.937649] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:58.789 [2024-11-20 10:58:47.937658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:58.789 [2024-11-20 10:58:47.937667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.789 [2024-11-20 10:58:47.937675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:58.789 [2024-11-20 10:58:47.937684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:58.789 [2024-11-20 10:58:47.937693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.789 [2024-11-20 10:58:47.937702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:58.789 [2024-11-20 10:58:47.937711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:58.789 [2024-11-20 10:58:47.937720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.789 [2024-11-20 10:58:47.937729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:58.789 [2024-11-20 10:58:47.937738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:58.789 [2024-11-20 10:58:47.937747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.789 [2024-11-20 10:58:47.937756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:58.789 [2024-11-20 10:58:47.937764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:58.789 [2024-11-20 10:58:47.937773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:58.789 [2024-11-20 10:58:47.937782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:58.789 [2024-11-20 10:58:47.937790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:58.789 [2024-11-20 10:58:47.937799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:58.789 [2024-11-20 10:58:47.937807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:58.789 [2024-11-20 10:58:47.937816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:58.789 [2024-11-20 10:58:47.937835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:58.789 [2024-11-20 10:58:47.937843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:58.789 [2024-11-20 10:58:47.937851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:58.789 [2024-11-20 10:58:47.937859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.789 [2024-11-20 10:58:47.937867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:58.789 [2024-11-20 10:58:47.937875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:58.789 [2024-11-20 10:58:47.937886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.789 [2024-11-20 10:58:47.937895] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:58.789 [2024-11-20 10:58:47.937904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:58.789 [2024-11-20 10:58:47.937913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:58.789 [2024-11-20 10:58:47.937925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:58.789 [2024-11-20 10:58:47.937934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:58.789 
[2024-11-20 10:58:47.937943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:58.789 [2024-11-20 10:58:47.937951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:58.789 [2024-11-20 10:58:47.937960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:58.789 [2024-11-20 10:58:47.937968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:58.789 [2024-11-20 10:58:47.937976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:58.789 [2024-11-20 10:58:47.937986] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:58.789 [2024-11-20 10:58:47.937998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:58.789 [2024-11-20 10:58:47.938009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:58.789 [2024-11-20 10:58:47.938018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:58.789 [2024-11-20 10:58:47.938027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:58.790 [2024-11-20 10:58:47.938037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:58.790 [2024-11-20 10:58:47.938046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:58.790 [2024-11-20 10:58:47.938056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:58.790 [2024-11-20 10:58:47.938065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:58.790 [2024-11-20 10:58:47.938074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:58.790 [2024-11-20 10:58:47.938084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:58.790 [2024-11-20 10:58:47.938093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:58.790 [2024-11-20 10:58:47.938102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:58.790 [2024-11-20 10:58:47.938111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:58.790 [2024-11-20 10:58:47.938121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:58.790 [2024-11-20 10:58:47.938131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:58.790 [2024-11-20 10:58:47.938140] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:58.790 [2024-11-20 10:58:47.938150] upgrade/ftl_sb_v5.c: 
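
The blk_offs/blk_sz values in the superblock dump are hex block counts, while the dump_region lines earlier give the same regions in MiB. A cross-check for the L2P region (type 0x2), again assuming 4 KiB blocks, plus the same figure derived independently from the reported 23592960 four-byte L2P entries:

#include <stdio.h>

int main(void)
{
        const double mib = 1024.0 * 1024.0;

        /* Region type:0x2 (l2p): blk_offs:0x20 blk_sz:0x5a00, 4 KiB blocks */
        printf("l2p offset: %.2f MiB\n", 0x20 * 4096.0 / mib);   /* 0.12  */
        printf("l2p size:   %.2f MiB\n", 0x5a00 * 4096.0 / mib); /* 90.00 */

        /* Same figure from "L2P entries: 23592960" x "L2P address size: 4" */
        printf("l2p table:  %.2f MiB\n", 23592960.0 * 4.0 / mib); /* 90.00 */
        return 0;
}
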
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:58.790 [2024-11-20 10:58:47.938161] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:58.790 [2024-11-20 10:58:47.938170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:58.790 [2024-11-20 10:58:47.938179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:58.790 [2024-11-20 10:58:47.938190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:58.790 [2024-11-20 10:58:47.938200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.790 [2024-11-20 10:58:47.938210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:58.790 [2024-11-20 10:58:47.938223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:21:58.790 [2024-11-20 10:58:47.938231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.790 [2024-11-20 10:58:47.974814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.790 [2024-11-20 10:58:47.974848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:58.790 [2024-11-20 10:58:47.974860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.593 ms 00:21:58.790 [2024-11-20 10:58:47.974886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.790 [2024-11-20 10:58:47.974995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.790 [2024-11-20 10:58:47.975013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:58.790 [2024-11-20 10:58:47.975023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:21:58.790 [2024-11-20 10:58:47.975033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.049 [2024-11-20 10:58:48.048900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.049 [2024-11-20 10:58:48.048937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:59.049 [2024-11-20 10:58:48.048950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.965 ms 00:21:59.049 [2024-11-20 10:58:48.048963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.049 [2024-11-20 10:58:48.049052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.049 [2024-11-20 10:58:48.049065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:59.049 [2024-11-20 10:58:48.049075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:59.049 [2024-11-20 10:58:48.049084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.049 [2024-11-20 10:58:48.049501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.049 [2024-11-20 10:58:48.049513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:59.049 [2024-11-20 10:58:48.049523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:21:59.049 [2024-11-20 10:58:48.049538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.049 [2024-11-20 
10:58:48.049680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.049 [2024-11-20 10:58:48.049695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:59.049 [2024-11-20 10:58:48.049705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:21:59.049 [2024-11-20 10:58:48.049734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.049 [2024-11-20 10:58:48.067743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.049 [2024-11-20 10:58:48.067776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:59.049 [2024-11-20 10:58:48.067788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.016 ms 00:21:59.049 [2024-11-20 10:58:48.067797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.049 [2024-11-20 10:58:48.086361] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:21:59.049 [2024-11-20 10:58:48.086399] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:59.049 [2024-11-20 10:58:48.086413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.049 [2024-11-20 10:58:48.086423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:59.049 [2024-11-20 10:58:48.086433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.548 ms 00:21:59.049 [2024-11-20 10:58:48.086442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.049 [2024-11-20 10:58:48.114779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.049 [2024-11-20 10:58:48.114826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:59.049 [2024-11-20 10:58:48.114839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.299 ms 00:21:59.049 [2024-11-20 10:58:48.114848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.049 [2024-11-20 10:58:48.131891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.049 [2024-11-20 10:58:48.131926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:59.049 [2024-11-20 10:58:48.131939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.994 ms 00:21:59.049 [2024-11-20 10:58:48.131947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.049 [2024-11-20 10:58:48.149351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.049 [2024-11-20 10:58:48.149385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:59.049 [2024-11-20 10:58:48.149397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.361 ms 00:21:59.049 [2024-11-20 10:58:48.149406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.049 [2024-11-20 10:58:48.150143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.049 [2024-11-20 10:58:48.150171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:59.049 [2024-11-20 10:58:48.150182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:21:59.049 [2024-11-20 10:58:48.150192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.049 [2024-11-20 10:58:48.233625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:59.049 [2024-11-20 10:58:48.233675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:59.049 [2024-11-20 10:58:48.233691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.539 ms 00:21:59.049 [2024-11-20 10:58:48.233701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.049 [2024-11-20 10:58:48.244022] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:59.049 [2024-11-20 10:58:48.259135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.049 [2024-11-20 10:58:48.259331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:59.049 [2024-11-20 10:58:48.259355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.384 ms 00:21:59.049 [2024-11-20 10:58:48.259366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.049 [2024-11-20 10:58:48.259485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.050 [2024-11-20 10:58:48.259498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:59.050 [2024-11-20 10:58:48.259509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:59.050 [2024-11-20 10:58:48.259519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.050 [2024-11-20 10:58:48.259570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.050 [2024-11-20 10:58:48.259581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:59.050 [2024-11-20 10:58:48.259612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:59.050 [2024-11-20 10:58:48.259622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.050 [2024-11-20 10:58:48.259668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.050 [2024-11-20 10:58:48.259682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:59.050 [2024-11-20 10:58:48.259693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:59.050 [2024-11-20 10:58:48.259703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.050 [2024-11-20 10:58:48.259735] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:59.050 [2024-11-20 10:58:48.259748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.050 [2024-11-20 10:58:48.259758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:59.050 [2024-11-20 10:58:48.259768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:59.050 [2024-11-20 10:58:48.259778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.050 [2024-11-20 10:58:48.294754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.050 [2024-11-20 10:58:48.294793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:59.050 [2024-11-20 10:58:48.294807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.009 ms 00:21:59.050 [2024-11-20 10:58:48.294817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.050 [2024-11-20 10:58:48.294924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.050 [2024-11-20 10:58:48.294937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:21:59.050 [2024-11-20 10:58:48.294948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:59.050 [2024-11-20 10:58:48.294957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.050 [2024-11-20 10:58:48.295852] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:59.050 [2024-11-20 10:58:48.299957] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 395.647 ms, result 0 00:21:59.308 [2024-11-20 10:58:48.300888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:59.308 [2024-11-20 10:58:48.318833] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:00.242  [2024-11-20T10:58:50.430Z] Copying: 27/256 [MB] (27 MBps) [2024-11-20T10:58:51.364Z] Copying: 52/256 [MB] (24 MBps) [2024-11-20T10:58:52.737Z] Copying: 77/256 [MB] (25 MBps) [2024-11-20T10:58:53.670Z] Copying: 101/256 [MB] (24 MBps) [2024-11-20T10:58:54.605Z] Copying: 126/256 [MB] (24 MBps) [2024-11-20T10:58:55.542Z] Copying: 150/256 [MB] (24 MBps) [2024-11-20T10:58:56.477Z] Copying: 175/256 [MB] (24 MBps) [2024-11-20T10:58:57.413Z] Copying: 199/256 [MB] (23 MBps) [2024-11-20T10:58:58.348Z] Copying: 223/256 [MB] (24 MBps) [2024-11-20T10:58:58.916Z] Copying: 247/256 [MB] (23 MBps) [2024-11-20T10:58:58.917Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-20 10:58:58.659510] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:09.664 [2024-11-20 10:58:58.673783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.664 [2024-11-20 10:58:58.673950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:09.664 [2024-11-20 10:58:58.673974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:09.664 [2024-11-20 10:58:58.673992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.664 [2024-11-20 10:58:58.674021] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:09.664 [2024-11-20 10:58:58.678124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.664 [2024-11-20 10:58:58.678149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:09.664 [2024-11-20 10:58:58.678160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.094 ms 00:22:09.664 [2024-11-20 10:58:58.678169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.664 [2024-11-20 10:58:58.678394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.664 [2024-11-20 10:58:58.678406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:09.664 [2024-11-20 10:58:58.678416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:22:09.664 [2024-11-20 10:58:58.678426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.664 [2024-11-20 10:58:58.681264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.664 [2024-11-20 10:58:58.681381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:09.664 [2024-11-20 10:58:58.681416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.827 ms 00:22:09.664 [2024-11-20 10:58:58.681426] 
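
The copy samples above tick along at 23-27 MBps and the final average is 24 MBps, which squares with the timestamps: roughly 256 MB between the IO channel coming up around 10:58:48.3 and being torn down at about 10:58:58.66 (an approximate window read off the surrounding log lines):

#include <stdio.h>

int main(void)
{
        /* approximate copy window taken from the surrounding timestamps */
        double seconds = 58.66 - 48.30;
        printf("avg: %.1f MBps\n", 256.0 / seconds); /* ~24.7; log says 24 */
        return 0;
}
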
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.664 [2024-11-20 10:58:58.686806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.664 [2024-11-20 10:58:58.686835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:09.664 [2024-11-20 10:58:58.686846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.366 ms 00:22:09.664 [2024-11-20 10:58:58.686855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.664 [2024-11-20 10:58:58.721384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.664 [2024-11-20 10:58:58.721421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:09.664 [2024-11-20 10:58:58.721433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.511 ms 00:22:09.664 [2024-11-20 10:58:58.721442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.664 [2024-11-20 10:58:58.741421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.664 [2024-11-20 10:58:58.741461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:09.664 [2024-11-20 10:58:58.741474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.960 ms 00:22:09.664 [2024-11-20 10:58:58.741486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.664 [2024-11-20 10:58:58.741621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.664 [2024-11-20 10:58:58.741651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:09.664 [2024-11-20 10:58:58.741661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:22:09.664 [2024-11-20 10:58:58.741671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.664 [2024-11-20 10:58:58.775875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.664 [2024-11-20 10:58:58.775909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:09.664 [2024-11-20 10:58:58.775921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.230 ms 00:22:09.664 [2024-11-20 10:58:58.775946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.664 [2024-11-20 10:58:58.809818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.664 [2024-11-20 10:58:58.809861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:09.664 [2024-11-20 10:58:58.809873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.866 ms 00:22:09.664 [2024-11-20 10:58:58.809898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.664 [2024-11-20 10:58:58.844037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.664 [2024-11-20 10:58:58.844071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:09.664 [2024-11-20 10:58:58.844082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.143 ms 00:22:09.664 [2024-11-20 10:58:58.844091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.664 [2024-11-20 10:58:58.877835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.664 [2024-11-20 10:58:58.877868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:09.664 [2024-11-20 10:58:58.877880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 33.725 ms 00:22:09.664 [2024-11-20 10:58:58.877889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.664 [2024-11-20 10:58:58.877937] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
Band 1 .. Band 97: 0 / 261120 wr_cnt: 0 state: free (97 identical per-band entries condensed) 
00:22:09.665 [2024-11-20 10:58:58.878960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:09.665 [2024-11-20 10:58:58.878971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:09.665 [2024-11-20 10:58:58.878980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:09.665 [2024-11-20 10:58:58.878997] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:09.665 [2024-11-20 10:58:58.879006] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4b8e0acb-4646-4966-b129-dade0fcf8fcb 00:22:09.665 [2024-11-20 10:58:58.879017] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:09.665 [2024-11-20 10:58:58.879026] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:09.665 [2024-11-20 10:58:58.879035] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:09.665 [2024-11-20 10:58:58.879045] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:09.665 [2024-11-20 10:58:58.879054] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:09.665 [2024-11-20 10:58:58.879064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:09.665 [2024-11-20 10:58:58.879073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:09.665 [2024-11-20 10:58:58.879081] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:09.665 [2024-11-20 10:58:58.879089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:09.665 [2024-11-20 10:58:58.879098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.665 [2024-11-20 10:58:58.879112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:09.665 [2024-11-20 10:58:58.879121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.164 ms 00:22:09.665 [2024-11-20 10:58:58.879131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.665 [2024-11-20 10:58:58.898071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.665 [2024-11-20 10:58:58.898201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:09.665 [2024-11-20 10:58:58.898235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.936 ms 00:22:09.665 [2024-11-20 10:58:58.898245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.665 [2024-11-20 10:58:58.898782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.665 [2024-11-20 10:58:58.898796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:09.665 [2024-11-20 10:58:58.898806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:22:09.665 [2024-11-20 10:58:58.898816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.924 [2024-11-20 10:58:58.950157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.924 [2024-11-20 10:58:58.950192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:09.924 [2024-11-20 10:58:58.950204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.925 [2024-11-20 10:58:58.950214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.925 [2024-11-20 10:58:58.950333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.925 [2024-11-20 
10:58:58.950347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:09.925 [2024-11-20 10:58:58.950357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.925 [2024-11-20 10:58:58.950366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.925 [2024-11-20 10:58:58.950411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.925 [2024-11-20 10:58:58.950423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:09.925 [2024-11-20 10:58:58.950432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.925 [2024-11-20 10:58:58.950442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.925 [2024-11-20 10:58:58.950460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.925 [2024-11-20 10:58:58.950477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:09.925 [2024-11-20 10:58:58.950487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.925 [2024-11-20 10:58:58.950496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.925 [2024-11-20 10:58:59.068009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.925 [2024-11-20 10:58:59.068061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:09.925 [2024-11-20 10:58:59.068074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.925 [2024-11-20 10:58:59.068100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.925 [2024-11-20 10:58:59.164036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.925 [2024-11-20 10:58:59.164092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:09.925 [2024-11-20 10:58:59.164105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.925 [2024-11-20 10:58:59.164131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.925 [2024-11-20 10:58:59.164192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.925 [2024-11-20 10:58:59.164204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:09.925 [2024-11-20 10:58:59.164214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.925 [2024-11-20 10:58:59.164224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.925 [2024-11-20 10:58:59.164252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.925 [2024-11-20 10:58:59.164263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:09.925 [2024-11-20 10:58:59.164282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.925 [2024-11-20 10:58:59.164293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.925 [2024-11-20 10:58:59.164410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.925 [2024-11-20 10:58:59.164423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:09.925 [2024-11-20 10:58:59.164433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.925 [2024-11-20 10:58:59.164444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.925 [2024-11-20 10:58:59.164479] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.925 [2024-11-20 10:58:59.164491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:09.925 [2024-11-20 10:58:59.164501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.925 [2024-11-20 10:58:59.164519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.925 [2024-11-20 10:58:59.164558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.925 [2024-11-20 10:58:59.164570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:09.925 [2024-11-20 10:58:59.164580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.925 [2024-11-20 10:58:59.164590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.925 [2024-11-20 10:58:59.164651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.925 [2024-11-20 10:58:59.164663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:09.925 [2024-11-20 10:58:59.164681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.925 [2024-11-20 10:58:59.164691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.925 [2024-11-20 10:58:59.164835] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 491.834 ms, result 0 00:22:11.317 00:22:11.317 00:22:11.317 10:59:00 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:22:11.317 10:59:00 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:11.628 10:59:00 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:11.628 [2024-11-20 10:59:00.668935] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:22:11.628 [2024-11-20 10:59:00.669062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78436 ] 00:22:11.628 [2024-11-20 10:59:00.845570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.886 [2024-11-20 10:59:00.951196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.145 [2024-11-20 10:59:01.286687] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:12.145 [2024-11-20 10:59:01.286754] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:12.405 [2024-11-20 10:59:01.447572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.405 [2024-11-20 10:59:01.447627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:12.405 [2024-11-20 10:59:01.447642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:12.405 [2024-11-20 10:59:01.447652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.405 [2024-11-20 10:59:01.450656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.405 [2024-11-20 10:59:01.450693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:12.405 [2024-11-20 10:59:01.450706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.989 ms 00:22:12.405 [2024-11-20 10:59:01.450715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.405 [2024-11-20 10:59:01.450825] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:12.405 [2024-11-20 10:59:01.451768] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:12.405 [2024-11-20 10:59:01.451793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.405 [2024-11-20 10:59:01.451804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:12.405 [2024-11-20 10:59:01.451815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:22:12.405 [2024-11-20 10:59:01.451824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.405 [2024-11-20 10:59:01.453432] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:12.405 [2024-11-20 10:59:01.471568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.405 [2024-11-20 10:59:01.471627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:12.405 [2024-11-20 10:59:01.471640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.167 ms 00:22:12.405 [2024-11-20 10:59:01.471666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.405 [2024-11-20 10:59:01.471764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.405 [2024-11-20 10:59:01.471779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:12.405 [2024-11-20 10:59:01.471790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:12.405 [2024-11-20 10:59:01.471800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.405 [2024-11-20 10:59:01.478345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:12.405 [2024-11-20 10:59:01.478372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:12.405 [2024-11-20 10:59:01.478383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.516 ms 00:22:12.405 [2024-11-20 10:59:01.478393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.405 [2024-11-20 10:59:01.478482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.405 [2024-11-20 10:59:01.478496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:12.405 [2024-11-20 10:59:01.478513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:22:12.405 [2024-11-20 10:59:01.478523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.405 [2024-11-20 10:59:01.478565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.405 [2024-11-20 10:59:01.478580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:12.405 [2024-11-20 10:59:01.478590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:12.405 [2024-11-20 10:59:01.478600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.405 [2024-11-20 10:59:01.478629] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:12.405 [2024-11-20 10:59:01.483407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.405 [2024-11-20 10:59:01.483568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:12.405 [2024-11-20 10:59:01.483702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.790 ms 00:22:12.405 [2024-11-20 10:59:01.483740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.405 [2024-11-20 10:59:01.483831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.405 [2024-11-20 10:59:01.483867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:12.405 [2024-11-20 10:59:01.483957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:12.405 [2024-11-20 10:59:01.483992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.405 [2024-11-20 10:59:01.484042] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:12.405 [2024-11-20 10:59:01.484091] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:12.405 [2024-11-20 10:59:01.484214] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:12.405 [2024-11-20 10:59:01.484413] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:12.405 [2024-11-20 10:59:01.484541] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:12.405 [2024-11-20 10:59:01.484648] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:12.405 [2024-11-20 10:59:01.484704] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:12.405 [2024-11-20 10:59:01.484755] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:12.405 [2024-11-20 10:59:01.484883] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:12.406 [2024-11-20 10:59:01.484989] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:12.406 [2024-11-20 10:59:01.485021] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:12.406 [2024-11-20 10:59:01.485050] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:12.406 [2024-11-20 10:59:01.485080] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:12.406 [2024-11-20 10:59:01.485222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.406 [2024-11-20 10:59:01.485237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:12.406 [2024-11-20 10:59:01.485249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.183 ms 00:22:12.406 [2024-11-20 10:59:01.485259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.406 [2024-11-20 10:59:01.485345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.406 [2024-11-20 10:59:01.485356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:12.406 [2024-11-20 10:59:01.485372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:12.406 [2024-11-20 10:59:01.485382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.406 [2024-11-20 10:59:01.485472] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:12.406 [2024-11-20 10:59:01.485484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:12.406 [2024-11-20 10:59:01.485495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:12.406 [2024-11-20 10:59:01.485505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.406 [2024-11-20 10:59:01.485517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:12.406 [2024-11-20 10:59:01.485526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:12.406 [2024-11-20 10:59:01.485536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:12.406 [2024-11-20 10:59:01.485545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:12.406 [2024-11-20 10:59:01.485555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:12.406 [2024-11-20 10:59:01.485564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:12.406 [2024-11-20 10:59:01.485574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:12.406 [2024-11-20 10:59:01.485583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:12.406 [2024-11-20 10:59:01.485592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:12.406 [2024-11-20 10:59:01.485623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:12.406 [2024-11-20 10:59:01.485633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:12.406 [2024-11-20 10:59:01.485642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.406 [2024-11-20 10:59:01.485651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:12.406 [2024-11-20 10:59:01.485661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:12.406 [2024-11-20 10:59:01.485670] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.406 [2024-11-20 10:59:01.485679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:12.406 [2024-11-20 10:59:01.485688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:12.406 [2024-11-20 10:59:01.485697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:12.406 [2024-11-20 10:59:01.485706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:12.406 [2024-11-20 10:59:01.485715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:12.406 [2024-11-20 10:59:01.485724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:12.406 [2024-11-20 10:59:01.485733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:12.406 [2024-11-20 10:59:01.485743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:12.406 [2024-11-20 10:59:01.485752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:12.406 [2024-11-20 10:59:01.485761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:12.406 [2024-11-20 10:59:01.485770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:12.406 [2024-11-20 10:59:01.485780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:12.406 [2024-11-20 10:59:01.485789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:12.406 [2024-11-20 10:59:01.485798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:12.406 [2024-11-20 10:59:01.485807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:12.406 [2024-11-20 10:59:01.485816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:12.406 [2024-11-20 10:59:01.485825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:12.406 [2024-11-20 10:59:01.485835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:12.406 [2024-11-20 10:59:01.485844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:12.406 [2024-11-20 10:59:01.485853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:12.406 [2024-11-20 10:59:01.485861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.406 [2024-11-20 10:59:01.485870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:12.406 [2024-11-20 10:59:01.485879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:12.406 [2024-11-20 10:59:01.485888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.406 [2024-11-20 10:59:01.485897] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:12.406 [2024-11-20 10:59:01.485907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:12.406 [2024-11-20 10:59:01.485917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:12.406 [2024-11-20 10:59:01.485930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:12.406 [2024-11-20 10:59:01.485940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:12.406 [2024-11-20 10:59:01.485949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:12.406 [2024-11-20 10:59:01.485959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:12.406 
[2024-11-20 10:59:01.485968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:12.406 [2024-11-20 10:59:01.485977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:12.406 [2024-11-20 10:59:01.485986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:12.406 [2024-11-20 10:59:01.485997] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:12.406 [2024-11-20 10:59:01.486010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:12.406 [2024-11-20 10:59:01.486022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:12.406 [2024-11-20 10:59:01.486033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:12.406 [2024-11-20 10:59:01.486043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:12.406 [2024-11-20 10:59:01.486053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:12.406 [2024-11-20 10:59:01.486064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:12.406 [2024-11-20 10:59:01.486074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:12.406 [2024-11-20 10:59:01.486084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:12.406 [2024-11-20 10:59:01.486095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:12.406 [2024-11-20 10:59:01.486105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:12.406 [2024-11-20 10:59:01.486116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:12.406 [2024-11-20 10:59:01.486126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:12.406 [2024-11-20 10:59:01.486136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:12.406 [2024-11-20 10:59:01.486147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:12.406 [2024-11-20 10:59:01.486158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:12.406 [2024-11-20 10:59:01.486168] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:12.406 [2024-11-20 10:59:01.486180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:12.406 [2024-11-20 10:59:01.486190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:12.406 [2024-11-20 10:59:01.486201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:12.406 [2024-11-20 10:59:01.486212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:12.406 [2024-11-20 10:59:01.486222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:12.406 [2024-11-20 10:59:01.486233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.406 [2024-11-20 10:59:01.486243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:12.406 [2024-11-20 10:59:01.486257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:22:12.406 [2024-11-20 10:59:01.486266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.406 [2024-11-20 10:59:01.525119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.406 [2024-11-20 10:59:01.525256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:12.406 [2024-11-20 10:59:01.525340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.864 ms 00:22:12.406 [2024-11-20 10:59:01.525376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.407 [2024-11-20 10:59:01.525522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.407 [2024-11-20 10:59:01.525564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:12.407 [2024-11-20 10:59:01.525662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:12.407 [2024-11-20 10:59:01.525698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.407 [2024-11-20 10:59:01.584797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.407 [2024-11-20 10:59:01.584931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:12.407 [2024-11-20 10:59:01.585004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.143 ms 00:22:12.407 [2024-11-20 10:59:01.585046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.407 [2024-11-20 10:59:01.585162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.407 [2024-11-20 10:59:01.585199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:12.407 [2024-11-20 10:59:01.585231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:12.407 [2024-11-20 10:59:01.585261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.407 [2024-11-20 10:59:01.585795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.407 [2024-11-20 10:59:01.585896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:12.407 [2024-11-20 10:59:01.585970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:22:12.407 [2024-11-20 10:59:01.586013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.407 [2024-11-20 10:59:01.586158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.407 [2024-11-20 10:59:01.586193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:12.407 [2024-11-20 10:59:01.586264] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:22:12.407 [2024-11-20 10:59:01.586298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.407 [2024-11-20 10:59:01.606376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.407 [2024-11-20 10:59:01.606537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:12.407 [2024-11-20 10:59:01.606626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.062 ms 00:22:12.407 [2024-11-20 10:59:01.606664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.407 [2024-11-20 10:59:01.625955] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:12.407 [2024-11-20 10:59:01.626134] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:12.407 [2024-11-20 10:59:01.626267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.407 [2024-11-20 10:59:01.626300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:12.407 [2024-11-20 10:59:01.626331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.490 ms 00:22:12.407 [2024-11-20 10:59:01.626362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.407 [2024-11-20 10:59:01.655516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.407 [2024-11-20 10:59:01.655617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:12.407 [2024-11-20 10:59:01.655658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.102 ms 00:22:12.407 [2024-11-20 10:59:01.655689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.665 [2024-11-20 10:59:01.673615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.665 [2024-11-20 10:59:01.673772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:12.665 [2024-11-20 10:59:01.673883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.816 ms 00:22:12.665 [2024-11-20 10:59:01.673919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.665 [2024-11-20 10:59:01.691225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.666 [2024-11-20 10:59:01.691352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:12.666 [2024-11-20 10:59:01.691478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.242 ms 00:22:12.666 [2024-11-20 10:59:01.691514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.666 [2024-11-20 10:59:01.692351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.666 [2024-11-20 10:59:01.692466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:12.666 [2024-11-20 10:59:01.692541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:22:12.666 [2024-11-20 10:59:01.692557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.666 [2024-11-20 10:59:01.771984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.666 [2024-11-20 10:59:01.772046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:12.666 [2024-11-20 10:59:01.772061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 79.521 ms 00:22:12.666 [2024-11-20 10:59:01.772071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.666 [2024-11-20 10:59:01.782392] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:12.666 [2024-11-20 10:59:01.797355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.666 [2024-11-20 10:59:01.797400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:12.666 [2024-11-20 10:59:01.797416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.236 ms 00:22:12.666 [2024-11-20 10:59:01.797426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.666 [2024-11-20 10:59:01.797538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.666 [2024-11-20 10:59:01.797551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:12.666 [2024-11-20 10:59:01.797562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:12.666 [2024-11-20 10:59:01.797571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.666 [2024-11-20 10:59:01.797652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.666 [2024-11-20 10:59:01.797666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:12.666 [2024-11-20 10:59:01.797677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:22:12.666 [2024-11-20 10:59:01.797686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.666 [2024-11-20 10:59:01.797716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.666 [2024-11-20 10:59:01.797731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:12.666 [2024-11-20 10:59:01.797755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:12.666 [2024-11-20 10:59:01.797765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.666 [2024-11-20 10:59:01.797798] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:12.666 [2024-11-20 10:59:01.797810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.666 [2024-11-20 10:59:01.797820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:12.666 [2024-11-20 10:59:01.797830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:12.666 [2024-11-20 10:59:01.797839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.666 [2024-11-20 10:59:01.832262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.666 [2024-11-20 10:59:01.832302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:12.666 [2024-11-20 10:59:01.832316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.454 ms 00:22:12.666 [2024-11-20 10:59:01.832326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.666 [2024-11-20 10:59:01.832434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.666 [2024-11-20 10:59:01.832448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:12.666 [2024-11-20 10:59:01.832458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:12.666 [2024-11-20 10:59:01.832468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:12.666 [2024-11-20 10:59:01.833396] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:12.666 [2024-11-20 10:59:01.837454] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.173 ms, result 0 00:22:12.666 [2024-11-20 10:59:01.838368] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:12.666 [2024-11-20 10:59:01.856003] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:12.924  [2024-11-20T10:59:02.177Z] Copying: 4096/4096 [kB] (average 22 MBps)[2024-11-20 10:59:02.036778] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:12.924 [2024-11-20 10:59:02.049867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.924 [2024-11-20 10:59:02.049902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:12.924 [2024-11-20 10:59:02.049914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:12.924 [2024-11-20 10:59:02.049928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.924 [2024-11-20 10:59:02.049948] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:12.924 [2024-11-20 10:59:02.054044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.924 [2024-11-20 10:59:02.054163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:12.924 [2024-11-20 10:59:02.054198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.089 ms 00:22:12.924 [2024-11-20 10:59:02.054208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.924 [2024-11-20 10:59:02.056049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.924 [2024-11-20 10:59:02.056085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:12.924 [2024-11-20 10:59:02.056097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.816 ms 00:22:12.924 [2024-11-20 10:59:02.056107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.924 [2024-11-20 10:59:02.059321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.924 [2024-11-20 10:59:02.059360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:12.924 [2024-11-20 10:59:02.059372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.201 ms 00:22:12.924 [2024-11-20 10:59:02.059381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.924 [2024-11-20 10:59:02.064875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.924 [2024-11-20 10:59:02.065014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:12.924 [2024-11-20 10:59:02.065034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.473 ms 00:22:12.924 [2024-11-20 10:59:02.065044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.924 [2024-11-20 10:59:02.098997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.924 [2024-11-20 10:59:02.099123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:12.924 [2024-11-20 10:59:02.099157] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 33.948 ms
00:22:12.924 [2024-11-20 10:59:02.099166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:12.924 [2024-11-20 10:59:02.119514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:12.924 [2024-11-20 10:59:02.119556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:22:12.924 [2024-11-20 10:59:02.119572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.286 ms
00:22:12.924 [2024-11-20 10:59:02.119582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:12.925 [2024-11-20 10:59:02.119731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:12.925 [2024-11-20 10:59:02.119746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:22:12.925 [2024-11-20 10:59:02.119756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms
00:22:12.925 [2024-11-20 10:59:02.119766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:12.925 [2024-11-20 10:59:02.153766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:12.925 [2024-11-20 10:59:02.153800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:22:12.925 [2024-11-20 10:59:02.153812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.028 ms
00:22:12.925 [2024-11-20 10:59:02.153837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:13.184 [2024-11-20 10:59:02.188313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:13.184 [2024-11-20 10:59:02.188440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:22:13.184 [2024-11-20 10:59:02.188475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.480 ms
00:22:13.184 [2024-11-20 10:59:02.188485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:13.184 [2024-11-20 10:59:02.221674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:13.184 [2024-11-20 10:59:02.221709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:22:13.184 [2024-11-20 10:59:02.221720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.171 ms
00:22:13.184 [2024-11-20 10:59:02.221729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:13.184 [2024-11-20 10:59:02.255755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:13.184 [2024-11-20 10:59:02.255790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:22:13.184 [2024-11-20 10:59:02.255802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.989 ms
00:22:13.184 [2024-11-20 10:59:02.255810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:13.184 [2024-11-20 10:59:02.255855] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:22:13.184 [2024-11-20 10:59:02.255871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2-100: 99 further identical dump lines elided; every band reads 0 / 261120 wr_cnt: 0 state: free]
00:22:13.185 [2024-11-20 10:59:02.256899] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:13.185 [2024-11-20 10:59:02.256908] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4b8e0acb-4646-4966-b129-dade0fcf8fcb
00:22:13.185 [2024-11-20 10:59:02.256919] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:22:13.185 [2024-11-20 10:59:02.256928] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total
writes: 960 00:22:13.185 [2024-11-20 10:59:02.256937] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:13.185 [2024-11-20 10:59:02.256947] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:13.185 [2024-11-20 10:59:02.256956] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:13.185 [2024-11-20 10:59:02.256965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:13.185 [2024-11-20 10:59:02.256974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:13.185 [2024-11-20 10:59:02.256983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:13.185 [2024-11-20 10:59:02.256991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:13.186 [2024-11-20 10:59:02.257001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.186 [2024-11-20 10:59:02.257014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:13.186 [2024-11-20 10:59:02.257024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.148 ms 00:22:13.186 [2024-11-20 10:59:02.257033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.186 [2024-11-20 10:59:02.276078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.186 [2024-11-20 10:59:02.276109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:13.186 [2024-11-20 10:59:02.276121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.057 ms 00:22:13.186 [2024-11-20 10:59:02.276130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.186 [2024-11-20 10:59:02.276655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.186 [2024-11-20 10:59:02.276670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:13.186 [2024-11-20 10:59:02.276680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:22:13.186 [2024-11-20 10:59:02.276690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.186 [2024-11-20 10:59:02.329408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.186 [2024-11-20 10:59:02.329442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:13.186 [2024-11-20 10:59:02.329454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.186 [2024-11-20 10:59:02.329463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.186 [2024-11-20 10:59:02.329549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.186 [2024-11-20 10:59:02.329560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:13.186 [2024-11-20 10:59:02.329569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.186 [2024-11-20 10:59:02.329579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.186 [2024-11-20 10:59:02.329652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.186 [2024-11-20 10:59:02.329666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:13.186 [2024-11-20 10:59:02.329676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.186 [2024-11-20 10:59:02.329685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.186 [2024-11-20 10:59:02.329703] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.186 [2024-11-20 10:59:02.329729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:13.186 [2024-11-20 10:59:02.329740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.186 [2024-11-20 10:59:02.329749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.443 [2024-11-20 10:59:02.444599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.443 [2024-11-20 10:59:02.444811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:13.443 [2024-11-20 10:59:02.444834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.443 [2024-11-20 10:59:02.444845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.443 [2024-11-20 10:59:02.538397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.443 [2024-11-20 10:59:02.538441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:13.443 [2024-11-20 10:59:02.538453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.443 [2024-11-20 10:59:02.538463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.443 [2024-11-20 10:59:02.538528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.443 [2024-11-20 10:59:02.538539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:13.443 [2024-11-20 10:59:02.538550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.443 [2024-11-20 10:59:02.538559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.443 [2024-11-20 10:59:02.538585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.443 [2024-11-20 10:59:02.538610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:13.443 [2024-11-20 10:59:02.538626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.443 [2024-11-20 10:59:02.538652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.443 [2024-11-20 10:59:02.538763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.443 [2024-11-20 10:59:02.538776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:13.443 [2024-11-20 10:59:02.538787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.443 [2024-11-20 10:59:02.538796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.443 [2024-11-20 10:59:02.538831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.443 [2024-11-20 10:59:02.538842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:13.443 [2024-11-20 10:59:02.538853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.443 [2024-11-20 10:59:02.538866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.443 [2024-11-20 10:59:02.538904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.444 [2024-11-20 10:59:02.538915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:13.444 [2024-11-20 10:59:02.538925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.444 [2024-11-20 10:59:02.538934] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:13.444 [2024-11-20 10:59:02.538976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.444 [2024-11-20 10:59:02.538987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:13.444 [2024-11-20 10:59:02.539000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.444 [2024-11-20 10:59:02.539009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.444 [2024-11-20 10:59:02.539158] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 490.071 ms, result 0 00:22:14.378 00:22:14.378 00:22:14.378 10:59:03 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78461 00:22:14.378 10:59:03 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:14.378 10:59:03 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78461 00:22:14.378 10:59:03 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78461 ']' 00:22:14.378 10:59:03 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.378 10:59:03 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.378 10:59:03 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.378 10:59:03 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.378 10:59:03 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:14.637 [2024-11-20 10:59:03.647186] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
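Every FTL management step in the shutdown trace above is emitted by trace_step() as a fixed Action / name / duration / status quadruple, which makes transcripts like this one easy to mine when a run looks slow. A minimal sketch that ranks steps by their reported duration (the file name ftl_trim.log is hypothetical, and the patterns simply assume the line format visible above):

  #!/usr/bin/env bash
  # Rank FTL management steps by duration, slowest first.
  # In the trace a "name: <step>" line is always followed by its "duration: <n> ms" line.
  awk '
    /trace_step: \*NOTICE\*: .*name: /     { name = $0; sub(/.*name: /, "", name) }
    /trace_step: \*NOTICE\*: .*duration: / { d = $0; sub(/.*duration: /, "", d); sub(/ ms.*/, "", d)
                                             printf "%10.3f ms  %s\n", d, name }
  ' ftl_trim.log | sort -rn | head

Run against the shutdown block above, this puts the ~34 ms metadata persists (Persist trim metadata, Persist band info metadata, Set FTL clean state, Persist superblock) at the top and the sub-millisecond steps at the bottom.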
00:22:14.637 [2024-11-20 10:59:03.647302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78461 ] 00:22:14.637 [2024-11-20 10:59:03.826403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.896 [2024-11-20 10:59:03.933636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.831 10:59:04 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.831 10:59:04 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:15.831 10:59:04 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:15.831 [2024-11-20 10:59:04.958653] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:15.831 [2024-11-20 10:59:04.958706] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:16.091 [2024-11-20 10:59:05.144122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.091 [2024-11-20 10:59:05.144325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:16.091 [2024-11-20 10:59:05.144358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:16.091 [2024-11-20 10:59:05.144370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.091 [2024-11-20 10:59:05.148207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.091 [2024-11-20 10:59:05.148247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:16.091 [2024-11-20 10:59:05.148262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.814 ms 00:22:16.091 [2024-11-20 10:59:05.148273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.091 [2024-11-20 10:59:05.148379] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:16.091 [2024-11-20 10:59:05.149389] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:16.091 [2024-11-20 10:59:05.149428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.091 [2024-11-20 10:59:05.149439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:16.091 [2024-11-20 10:59:05.149452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:22:16.091 [2024-11-20 10:59:05.149461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.091 [2024-11-20 10:59:05.150939] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:16.091 [2024-11-20 10:59:05.170161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.091 [2024-11-20 10:59:05.170207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:16.091 [2024-11-20 10:59:05.170221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.258 ms 00:22:16.091 [2024-11-20 10:59:05.170235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.091 [2024-11-20 10:59:05.170327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.091 [2024-11-20 10:59:05.170345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:16.091 [2024-11-20 10:59:05.170356] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:16.091 [2024-11-20 10:59:05.170369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.091 [2024-11-20 10:59:05.176968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.091 [2024-11-20 10:59:05.177148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:16.091 [2024-11-20 10:59:05.177168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.557 ms 00:22:16.091 [2024-11-20 10:59:05.177184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.091 [2024-11-20 10:59:05.177323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.091 [2024-11-20 10:59:05.177343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:16.091 [2024-11-20 10:59:05.177354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:22:16.091 [2024-11-20 10:59:05.177369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.091 [2024-11-20 10:59:05.177407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.091 [2024-11-20 10:59:05.177423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:16.091 [2024-11-20 10:59:05.177433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:16.091 [2024-11-20 10:59:05.177448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.091 [2024-11-20 10:59:05.177472] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:16.091 [2024-11-20 10:59:05.182060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.091 [2024-11-20 10:59:05.182089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:16.091 [2024-11-20 10:59:05.182105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.596 ms 00:22:16.091 [2024-11-20 10:59:05.182131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.091 [2024-11-20 10:59:05.182205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.091 [2024-11-20 10:59:05.182217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:16.091 [2024-11-20 10:59:05.182232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:16.091 [2024-11-20 10:59:05.182247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.091 [2024-11-20 10:59:05.182274] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:16.091 [2024-11-20 10:59:05.182295] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:16.091 [2024-11-20 10:59:05.182355] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:16.091 [2024-11-20 10:59:05.182374] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:16.091 [2024-11-20 10:59:05.182465] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:16.091 [2024-11-20 10:59:05.182478] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:16.091 [2024-11-20 10:59:05.182498] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:16.091 [2024-11-20 10:59:05.182522] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:16.091 [2024-11-20 10:59:05.182538] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:16.091 [2024-11-20 10:59:05.182549] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:16.091 [2024-11-20 10:59:05.182564] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:16.091 [2024-11-20 10:59:05.182573] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:16.091 [2024-11-20 10:59:05.182609] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:16.091 [2024-11-20 10:59:05.182620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.091 [2024-11-20 10:59:05.182635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:16.091 [2024-11-20 10:59:05.182645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:22:16.091 [2024-11-20 10:59:05.182660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.091 [2024-11-20 10:59:05.182738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.091 [2024-11-20 10:59:05.182754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:16.091 [2024-11-20 10:59:05.182764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:16.091 [2024-11-20 10:59:05.182778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.091 [2024-11-20 10:59:05.182863] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:16.091 [2024-11-20 10:59:05.182880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:16.091 [2024-11-20 10:59:05.182891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:16.091 [2024-11-20 10:59:05.182906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.091 [2024-11-20 10:59:05.182916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:16.091 [2024-11-20 10:59:05.182930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:16.091 [2024-11-20 10:59:05.182939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:16.091 [2024-11-20 10:59:05.182960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:16.091 [2024-11-20 10:59:05.182970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:16.091 [2024-11-20 10:59:05.182983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:16.091 [2024-11-20 10:59:05.182993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:16.091 [2024-11-20 10:59:05.183006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:16.091 [2024-11-20 10:59:05.183016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:16.091 [2024-11-20 10:59:05.183030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:16.091 [2024-11-20 10:59:05.183039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:16.091 [2024-11-20 10:59:05.183052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.091 
[2024-11-20 10:59:05.183062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:16.091 [2024-11-20 10:59:05.183075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:16.091 [2024-11-20 10:59:05.183085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.091 [2024-11-20 10:59:05.183098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:16.091 [2024-11-20 10:59:05.183118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:16.091 [2024-11-20 10:59:05.183132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.091 [2024-11-20 10:59:05.183142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:16.091 [2024-11-20 10:59:05.183159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:16.091 [2024-11-20 10:59:05.183169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.091 [2024-11-20 10:59:05.183182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:16.091 [2024-11-20 10:59:05.183191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:16.091 [2024-11-20 10:59:05.183204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.091 [2024-11-20 10:59:05.183214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:16.091 [2024-11-20 10:59:05.183227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:16.091 [2024-11-20 10:59:05.183236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:16.092 [2024-11-20 10:59:05.183249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:16.092 [2024-11-20 10:59:05.183258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:16.092 [2024-11-20 10:59:05.183273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:16.092 [2024-11-20 10:59:05.183282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:16.092 [2024-11-20 10:59:05.183296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:16.092 [2024-11-20 10:59:05.183305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:16.092 [2024-11-20 10:59:05.183318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:16.092 [2024-11-20 10:59:05.183327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:16.092 [2024-11-20 10:59:05.183345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.092 [2024-11-20 10:59:05.183354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:16.092 [2024-11-20 10:59:05.183367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:16.092 [2024-11-20 10:59:05.183377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.092 [2024-11-20 10:59:05.183390] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:16.092 [2024-11-20 10:59:05.183403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:16.092 [2024-11-20 10:59:05.183422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:16.092 [2024-11-20 10:59:05.183431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:16.092 [2024-11-20 10:59:05.183446] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:16.092 [2024-11-20 10:59:05.183456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:16.092 [2024-11-20 10:59:05.183469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:16.092 [2024-11-20 10:59:05.183479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:16.092 [2024-11-20 10:59:05.183492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:16.092 [2024-11-20 10:59:05.183501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:16.092 [2024-11-20 10:59:05.183516] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:16.092 [2024-11-20 10:59:05.183529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:16.092 [2024-11-20 10:59:05.183548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:16.092 [2024-11-20 10:59:05.183559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:16.092 [2024-11-20 10:59:05.183575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:16.092 [2024-11-20 10:59:05.183585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:16.092 [2024-11-20 10:59:05.183610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:16.092 [2024-11-20 10:59:05.183621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:16.092 [2024-11-20 10:59:05.183636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:16.092 [2024-11-20 10:59:05.183646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:16.092 [2024-11-20 10:59:05.183660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:16.092 [2024-11-20 10:59:05.183670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:16.092 [2024-11-20 10:59:05.183685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:16.092 [2024-11-20 10:59:05.183695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:16.092 [2024-11-20 10:59:05.183709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:16.092 [2024-11-20 10:59:05.183719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:16.092 [2024-11-20 10:59:05.183734] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:16.092 [2024-11-20 
10:59:05.183745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:16.092 [2024-11-20 10:59:05.183766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:16.092 [2024-11-20 10:59:05.183776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:16.092 [2024-11-20 10:59:05.183791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:16.092 [2024-11-20 10:59:05.183801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:16.092 [2024-11-20 10:59:05.183827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.092 [2024-11-20 10:59:05.183837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:16.092 [2024-11-20 10:59:05.183851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:22:16.092 [2024-11-20 10:59:05.183861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.092 [2024-11-20 10:59:05.221363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.092 [2024-11-20 10:59:05.221497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:16.092 [2024-11-20 10:59:05.221688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.497 ms 00:22:16.092 [2024-11-20 10:59:05.221731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.092 [2024-11-20 10:59:05.221879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.092 [2024-11-20 10:59:05.222027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:16.092 [2024-11-20 10:59:05.222120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:22:16.092 [2024-11-20 10:59:05.222151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.092 [2024-11-20 10:59:05.268945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.092 [2024-11-20 10:59:05.269083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:16.092 [2024-11-20 10:59:05.269202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.817 ms 00:22:16.092 [2024-11-20 10:59:05.269242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.092 [2024-11-20 10:59:05.269358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.092 [2024-11-20 10:59:05.269397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:16.092 [2024-11-20 10:59:05.269493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:16.092 [2024-11-20 10:59:05.269531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.092 [2024-11-20 10:59:05.270019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.092 [2024-11-20 10:59:05.270061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:16.092 [2024-11-20 10:59:05.270228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:22:16.092 [2024-11-20 10:59:05.270265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:16.092 [2024-11-20 10:59:05.270412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.092 [2024-11-20 10:59:05.270450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:16.092 [2024-11-20 10:59:05.270589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:22:16.092 [2024-11-20 10:59:05.270638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.092 [2024-11-20 10:59:05.292427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.092 [2024-11-20 10:59:05.292566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:16.092 [2024-11-20 10:59:05.292741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.771 ms 00:22:16.092 [2024-11-20 10:59:05.292781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.092 [2024-11-20 10:59:05.311741] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:16.092 [2024-11-20 10:59:05.311906] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:16.092 [2024-11-20 10:59:05.312026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.092 [2024-11-20 10:59:05.312062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:16.092 [2024-11-20 10:59:05.312100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.137 ms 00:22:16.092 [2024-11-20 10:59:05.312133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.092 [2024-11-20 10:59:05.340024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.092 [2024-11-20 10:59:05.340179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:16.092 [2024-11-20 10:59:05.340283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.833 ms 00:22:16.092 [2024-11-20 10:59:05.340322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.351 [2024-11-20 10:59:05.357402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.351 [2024-11-20 10:59:05.357555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:16.351 [2024-11-20 10:59:05.357687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.983 ms 00:22:16.351 [2024-11-20 10:59:05.357727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.351 [2024-11-20 10:59:05.375185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.351 [2024-11-20 10:59:05.375304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:16.351 [2024-11-20 10:59:05.375406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.375 ms 00:22:16.351 [2024-11-20 10:59:05.375442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.351 [2024-11-20 10:59:05.376278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.351 [2024-11-20 10:59:05.376390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:16.351 [2024-11-20 10:59:05.376418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:22:16.351 [2024-11-20 10:59:05.376429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.351 [2024-11-20 
10:59:05.480844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.351 [2024-11-20 10:59:05.480903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:16.351 [2024-11-20 10:59:05.480925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.544 ms 00:22:16.351 [2024-11-20 10:59:05.480952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.351 [2024-11-20 10:59:05.491040] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:16.351 [2024-11-20 10:59:05.506172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.351 [2024-11-20 10:59:05.506230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:16.351 [2024-11-20 10:59:05.506250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.145 ms 00:22:16.351 [2024-11-20 10:59:05.506264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.351 [2024-11-20 10:59:05.506350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.351 [2024-11-20 10:59:05.506367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:16.351 [2024-11-20 10:59:05.506378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:16.351 [2024-11-20 10:59:05.506392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.351 [2024-11-20 10:59:05.506442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.351 [2024-11-20 10:59:05.506458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:16.351 [2024-11-20 10:59:05.506469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:16.351 [2024-11-20 10:59:05.506483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.351 [2024-11-20 10:59:05.506529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.351 [2024-11-20 10:59:05.506545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:16.351 [2024-11-20 10:59:05.506556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:16.351 [2024-11-20 10:59:05.506572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.352 [2024-11-20 10:59:05.506648] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:16.352 [2024-11-20 10:59:05.506671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.352 [2024-11-20 10:59:05.506681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:16.352 [2024-11-20 10:59:05.506703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:16.352 [2024-11-20 10:59:05.506712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.352 [2024-11-20 10:59:05.541447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.352 [2024-11-20 10:59:05.541484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:16.352 [2024-11-20 10:59:05.541503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.752 ms 00:22:16.352 [2024-11-20 10:59:05.541513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.352 [2024-11-20 10:59:05.541672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.352 [2024-11-20 10:59:05.541687] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:22:16.352 [2024-11-20 10:59:05.541704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms
00:22:16.352 [2024-11-20 10:59:05.541719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:16.352 [2024-11-20 10:59:05.542749] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:16.352 [2024-11-20 10:59:05.546810] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 398.903 ms, result 0
00:22:16.352 [2024-11-20 10:59:05.548003] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:16.352 Some configs were skipped because the RPC state that can call them passed over.
00:22:16.352 10:59:05 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:22:16.610 [2024-11-20 10:59:05.787291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:16.610 [2024-11-20 10:59:05.787462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:22:16.610 [2024-11-20 10:59:05.787486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.612 ms
00:22:16.610 [2024-11-20 10:59:05.787501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:16.610 [2024-11-20 10:59:05.787547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.868 ms, result 0
00:22:16.610 true
00:22:16.610 10:59:05 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:22:16.868 [2024-11-20 10:59:05.986890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:16.868 [2024-11-20 10:59:05.986936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:22:16.868 [2024-11-20 10:59:05.986953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.372 ms
00:22:16.868 [2024-11-20 10:59:05.986963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:16.868 [2024-11-20 10:59:05.987016] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.490 ms, result 0
00:22:16.869 true
00:22:16.869 10:59:06 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78461
00:22:16.869 10:59:06 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78461 ']'
00:22:16.869 10:59:06 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78461
00:22:16.869 10:59:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:22:16.869 10:59:06 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:16.869 10:59:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78461
00:22:16.869 10:59:06 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:16.869 10:59:06 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:16.869 10:59:06 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78461'
00:22:16.869 killing process with pid 78461
00:22:16.869 10:59:06 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78461
00:22:16.869 10:59:06 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78461
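The two 'FTL trim' management processes above were driven by trim.sh through the bdev_ftl_unmap RPC: one unmap of 1024 blocks at LBA 0 and one at LBA 23591936. Since 23591936 + 1024 = 23592960, the L2P entry count reported during startup, the pair trims exactly the first and the last 1024-block range of the device's logical space. A standalone repetition of the same two calls (a sketch: it assumes a running spdk_tgt with ftl0 already configured, and uses the repo path from this harness):

  #!/usr/bin/env bash
  # Trim the first and last 1024 blocks of ftl0 over the SPDK RPC socket.
  # Flags mirror the trim.sh invocations recorded in the transcript above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  "$RPC" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Each call prints 'true' on success, matching the two results logged above.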
00:22:18.246 [2024-11-20 10:59:07.111070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:18.246 [2024-11-20 10:59:07.111130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:22:18.246 [2024-11-20 10:59:07.111145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:22:18.246 [2024-11-20 10:59:07.111173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:18.246 [2024-11-20 10:59:07.111196] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:22:18.246 [2024-11-20 10:59:07.115398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:18.246 [2024-11-20 10:59:07.115433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:22:18.246 [2024-11-20 10:59:07.115449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.189 ms
00:22:18.246 [2024-11-20 10:59:07.115459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:18.246 [2024-11-20 10:59:07.115735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:18.246 [2024-11-20 10:59:07.115748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:22:18.246 [2024-11-20 10:59:07.115760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms
00:22:18.246 [2024-11-20 10:59:07.115770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:18.246 [2024-11-20 10:59:07.119012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:18.246 [2024-11-20 10:59:07.119050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:22:18.246 [2024-11-20 10:59:07.119067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.225 ms
00:22:18.246 [2024-11-20 10:59:07.119077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:18.246 [2024-11-20 10:59:07.124466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:18.246 [2024-11-20 10:59:07.124499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:22:18.246 [2024-11-20 10:59:07.124512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.359 ms
00:22:18.246 [2024-11-20 10:59:07.124521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:18.246 [2024-11-20 10:59:07.138989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:18.246 [2024-11-20 10:59:07.139023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:22:18.246 [2024-11-20 10:59:07.139040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.434 ms
00:22:18.246 [2024-11-20 10:59:07.139075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:18.246 [2024-11-20 10:59:07.148981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:18.246 [2024-11-20 10:59:07.149017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:22:18.246 [2024-11-20 10:59:07.149033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.850 ms
00:22:18.246 [2024-11-20 10:59:07.149059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:18.246 [2024-11-20 10:59:07.149198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:18.246 [2024-11-20 10:59:07.149211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:22:18.246 [2024-11-20 10:59:07.149198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.246 [2024-11-20 10:59:07.149211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:18.246 [2024-11-20 10:59:07.149223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:22:18.246 [2024-11-20 10:59:07.149232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.246 [2024-11-20 10:59:07.164169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.246 [2024-11-20 10:59:07.164304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:18.246 [2024-11-20 10:59:07.164344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.939 ms 00:22:18.246 [2024-11-20 10:59:07.164354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.246 [2024-11-20 10:59:07.179213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.246 [2024-11-20 10:59:07.179375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:18.246 [2024-11-20 10:59:07.179404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.807 ms 00:22:18.246 [2024-11-20 10:59:07.179413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.246 [2024-11-20 10:59:07.193412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.246 [2024-11-20 10:59:07.193537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:18.246 [2024-11-20 10:59:07.193580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.967 ms 00:22:18.246 [2024-11-20 10:59:07.193589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.246 [2024-11-20 10:59:07.207883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.246 [2024-11-20 10:59:07.208007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:18.246 [2024-11-20 10:59:07.208047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.192 ms 00:22:18.246 [2024-11-20 10:59:07.208056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.246 [2024-11-20 10:59:07.208145] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:18.246 [2024-11-20 10:59:07.208161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free 00:22:18.248 [2024-11-20 10:59:07.209523] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:18.248 [2024-11-20 10:59:07.209548] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4b8e0acb-4646-4966-b129-dade0fcf8fcb 00:22:18.248 [2024-11-20 10:59:07.209570] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:18.248 [2024-11-20 10:59:07.209591] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:18.248 [2024-11-20 10:59:07.209610] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:18.248 [2024-11-20 10:59:07.209624] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:18.248 [2024-11-20 10:59:07.209634] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:18.248 [2024-11-20 10:59:07.209648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:18.248 [2024-11-20 10:59:07.209658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:18.248 [2024-11-20 10:59:07.209672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:18.248 [2024-11-20 10:59:07.209681] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
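The WAF: inf entry above follows directly from the two counters just before it: write amplification factor is total media writes divided by user writes, and this run made 960 internal/metadata writes against 0 user writes, so the ratio is infinite. A minimal sketch of that arithmetic (the formula is the standard WAF definition; the zero-divide handling here is illustrative, not SPDK's code):

/* Write amplification factor: total media writes / user writes.
 * With user_writes == 0 this prints "WAF: inf", matching the dump above. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

static double waf(uint64_t total_writes, uint64_t user_writes)
{
        if (user_writes == 0)
                return INFINITY;        /* nothing but internal writes */
        return (double)total_writes / (double)user_writes;
}

int main(void)
{
        printf("WAF: %g\n", waf(960, 0));   /* -> WAF: inf */
        return 0;
}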
00:22:18.248 [2024-11-20 10:59:07.209695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.248 [2024-11-20 10:59:07.209705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:18.248 [2024-11-20 10:59:07.209719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.552 ms 00:22:18.248 [2024-11-20 10:59:07.209729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.248 [2024-11-20 10:59:07.228733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.248 [2024-11-20 10:59:07.228765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:18.248 [2024-11-20 10:59:07.228788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.999 ms 00:22:18.248 [2024-11-20 10:59:07.228797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.248 [2024-11-20 10:59:07.229371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.248 [2024-11-20 10:59:07.229386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:18.248 [2024-11-20 10:59:07.229401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:22:18.248 [2024-11-20 10:59:07.229416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.248 [2024-11-20 10:59:07.295185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.248 [2024-11-20 10:59:07.295219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:18.248 [2024-11-20 10:59:07.295236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.248 [2024-11-20 10:59:07.295263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.248 [2024-11-20 10:59:07.295345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.248 [2024-11-20 10:59:07.295358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:18.248 [2024-11-20 10:59:07.295372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.248 [2024-11-20 10:59:07.295387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.248 [2024-11-20 10:59:07.295438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.248 [2024-11-20 10:59:07.295451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:18.248 [2024-11-20 10:59:07.295471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.248 [2024-11-20 10:59:07.295481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.248 [2024-11-20 10:59:07.295503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.248 [2024-11-20 10:59:07.295514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:18.248 [2024-11-20 10:59:07.295528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.248 [2024-11-20 10:59:07.295538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.248 [2024-11-20 10:59:07.416848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.248 [2024-11-20 10:59:07.416892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:18.248 [2024-11-20 10:59:07.416911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.248 [2024-11-20 10:59:07.416938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.507 [2024-11-20
10:59:07.512641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.507 [2024-11-20 10:59:07.512685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:18.507 [2024-11-20 10:59:07.512705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.507 [2024-11-20 10:59:07.512720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.507 [2024-11-20 10:59:07.512837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.507 [2024-11-20 10:59:07.512850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:18.507 [2024-11-20 10:59:07.512870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.507 [2024-11-20 10:59:07.512880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.507 [2024-11-20 10:59:07.512924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.507 [2024-11-20 10:59:07.512935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:18.507 [2024-11-20 10:59:07.512950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.507 [2024-11-20 10:59:07.512976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.507 [2024-11-20 10:59:07.513096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.507 [2024-11-20 10:59:07.513113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:18.507 [2024-11-20 10:59:07.513129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.507 [2024-11-20 10:59:07.513139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.507 [2024-11-20 10:59:07.513184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.507 [2024-11-20 10:59:07.513196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:18.507 [2024-11-20 10:59:07.513211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.507 [2024-11-20 10:59:07.513221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.507 [2024-11-20 10:59:07.513265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.507 [2024-11-20 10:59:07.513281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:18.507 [2024-11-20 10:59:07.513301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.507 [2024-11-20 10:59:07.513311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.507 [2024-11-20 10:59:07.513359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.507 [2024-11-20 10:59:07.513371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:18.507 [2024-11-20 10:59:07.513386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.507 [2024-11-20 10:59:07.513396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.507 [2024-11-20 10:59:07.513539] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 403.089 ms, result 0 00:22:19.444 10:59:08 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:19.444 [2024-11-20 10:59:08.552136] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:22:19.444 [2024-11-20 10:59:08.552262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78525 ] 00:22:19.702 [2024-11-20 10:59:08.732349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.702 [2024-11-20 10:59:08.842300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.960 [2024-11-20 10:59:09.170921] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:19.960 [2024-11-20 10:59:09.171224] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:20.219 [2024-11-20 10:59:09.331615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.219 [2024-11-20 10:59:09.331823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:20.219 [2024-11-20 10:59:09.331847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:20.219 [2024-11-20 10:59:09.331858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.219 [2024-11-20 10:59:09.335026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.219 [2024-11-20 10:59:09.335192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:20.219 [2024-11-20 10:59:09.335213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.144 ms 00:22:20.219 [2024-11-20 10:59:09.335224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.219 [2024-11-20 10:59:09.335373] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:20.219 [2024-11-20 10:59:09.336475] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:20.219 [2024-11-20 10:59:09.336502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.219 [2024-11-20 10:59:09.336512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:20.219 [2024-11-20 10:59:09.336523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:22:20.219 [2024-11-20 10:59:09.336533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.219 [2024-11-20 10:59:09.338111] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:20.219 [2024-11-20 10:59:09.356124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.219 [2024-11-20 10:59:09.356164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:20.219 [2024-11-20 10:59:09.356178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.042 ms 00:22:20.219 [2024-11-20 10:59:09.356188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.219 [2024-11-20 10:59:09.356278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.219 [2024-11-20 10:59:09.356292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:20.219 [2024-11-20 10:59:09.356302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:20.219 [2024-11-20 
10:59:09.356312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.219 [2024-11-20 10:59:09.362914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.219 [2024-11-20 10:59:09.362941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:20.219 [2024-11-20 10:59:09.362953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.574 ms 00:22:20.219 [2024-11-20 10:59:09.362962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.219 [2024-11-20 10:59:09.363052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.219 [2024-11-20 10:59:09.363066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:20.219 [2024-11-20 10:59:09.363076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:22:20.219 [2024-11-20 10:59:09.363085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.219 [2024-11-20 10:59:09.363111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.219 [2024-11-20 10:59:09.363125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:20.219 [2024-11-20 10:59:09.363135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:20.219 [2024-11-20 10:59:09.363143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.219 [2024-11-20 10:59:09.363163] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:20.219 [2024-11-20 10:59:09.368118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.219 [2024-11-20 10:59:09.368148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:20.219 [2024-11-20 10:59:09.368160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.967 ms 00:22:20.219 [2024-11-20 10:59:09.368185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.219 [2024-11-20 10:59:09.368250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.219 [2024-11-20 10:59:09.368262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:20.219 [2024-11-20 10:59:09.368272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:20.219 [2024-11-20 10:59:09.368282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.219 [2024-11-20 10:59:09.368301] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:20.219 [2024-11-20 10:59:09.368326] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:20.219 [2024-11-20 10:59:09.368359] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:20.219 [2024-11-20 10:59:09.368377] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:20.219 [2024-11-20 10:59:09.368463] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:20.219 [2024-11-20 10:59:09.368475] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:20.219 [2024-11-20 10:59:09.368488] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
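The layout dump that follows reports each region twice: once in MiB (offset and blocks), and once in the 'SB metadata layout' sections as raw blk_offs/blk_sz counts of FTL blocks. Assuming the usual 4096-byte FTL block, the two views agree; for example the l2p region's blk_sz:0x5a00 is 23040 blocks, i.e. 90.00 MiB. A small conversion sketch (illustrative only; the 4 KiB block size is the assumption here, not something this log states):

/* Convert blk_offs/blk_sz values from the superblock layout dump below
 * into MiB, assuming a 4096-byte FTL block. */
#include <stdint.h>
#include <stdio.h>

#define FTL_BLOCK_SIZE 4096ULL  /* assumed block size in bytes */

static double blocks_to_mib(uint64_t blocks)
{
        return (double)(blocks * FTL_BLOCK_SIZE) / (1024.0 * 1024.0);
}

int main(void)
{
        /* Region type:0x2 (l2p): blk_offs:0x20 blk_sz:0x5a00 */
        printf("l2p: offset %.2f MiB, size %.2f MiB\n",
               blocks_to_mib(0x20), blocks_to_mib(0x5a00));
        /* -> offset 0.12 MiB, size 90.00 MiB, matching the MiB dump */
        return 0;
}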
00:22:20.219 [2024-11-20 10:59:09.368501] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:20.219 [2024-11-20 10:59:09.368516] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:20.219 [2024-11-20 10:59:09.368526] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:20.219 [2024-11-20 10:59:09.368536] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:20.219 [2024-11-20 10:59:09.368545] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:20.219 [2024-11-20 10:59:09.368554] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:20.219 [2024-11-20 10:59:09.368564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.219 [2024-11-20 10:59:09.368574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:20.219 [2024-11-20 10:59:09.368584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:22:20.219 [2024-11-20 10:59:09.368593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.219 [2024-11-20 10:59:09.368686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.219 [2024-11-20 10:59:09.368698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:20.219 [2024-11-20 10:59:09.368712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:20.219 [2024-11-20 10:59:09.368722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.219 [2024-11-20 10:59:09.368835] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:20.219 [2024-11-20 10:59:09.368847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:20.219 [2024-11-20 10:59:09.368858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:20.219 [2024-11-20 10:59:09.368868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.219 [2024-11-20 10:59:09.368878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:20.219 [2024-11-20 10:59:09.368887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:20.219 [2024-11-20 10:59:09.368896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:20.219 [2024-11-20 10:59:09.368905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:20.219 [2024-11-20 10:59:09.368914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:20.219 [2024-11-20 10:59:09.368922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:20.219 [2024-11-20 10:59:09.368932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:20.219 [2024-11-20 10:59:09.368942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:20.219 [2024-11-20 10:59:09.368951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:20.219 [2024-11-20 10:59:09.368971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:20.219 [2024-11-20 10:59:09.368980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:20.219 [2024-11-20 10:59:09.368989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.219 [2024-11-20 10:59:09.368998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:22:20.219 [2024-11-20 10:59:09.369007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:20.219 [2024-11-20 10:59:09.369016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.219 [2024-11-20 10:59:09.369025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:20.219 [2024-11-20 10:59:09.369033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:20.219 [2024-11-20 10:59:09.369042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:20.219 [2024-11-20 10:59:09.369051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:20.219 [2024-11-20 10:59:09.369059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:20.219 [2024-11-20 10:59:09.369068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:20.219 [2024-11-20 10:59:09.369077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:20.219 [2024-11-20 10:59:09.369086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:20.220 [2024-11-20 10:59:09.369094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:20.220 [2024-11-20 10:59:09.369119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:20.220 [2024-11-20 10:59:09.369128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:20.220 [2024-11-20 10:59:09.369137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:20.220 [2024-11-20 10:59:09.369145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:20.220 [2024-11-20 10:59:09.369154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:20.220 [2024-11-20 10:59:09.369163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:20.220 [2024-11-20 10:59:09.369172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:20.220 [2024-11-20 10:59:09.369181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:20.220 [2024-11-20 10:59:09.369189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:20.220 [2024-11-20 10:59:09.369198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:20.220 [2024-11-20 10:59:09.369207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:20.220 [2024-11-20 10:59:09.369215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.220 [2024-11-20 10:59:09.369224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:20.220 [2024-11-20 10:59:09.369233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:20.220 [2024-11-20 10:59:09.369243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.220 [2024-11-20 10:59:09.369252] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:20.220 [2024-11-20 10:59:09.369261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:20.220 [2024-11-20 10:59:09.369271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:20.220 [2024-11-20 10:59:09.369284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:20.220 [2024-11-20 10:59:09.369294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:20.220 [2024-11-20 10:59:09.369303] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:20.220 [2024-11-20 10:59:09.369312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:20.220 [2024-11-20 10:59:09.369321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:20.220 [2024-11-20 10:59:09.369330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:20.220 [2024-11-20 10:59:09.369339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:20.220 [2024-11-20 10:59:09.369350] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:20.220 [2024-11-20 10:59:09.369361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:20.220 [2024-11-20 10:59:09.369372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:20.220 [2024-11-20 10:59:09.369382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:20.220 [2024-11-20 10:59:09.369392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:20.220 [2024-11-20 10:59:09.369402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:20.220 [2024-11-20 10:59:09.369413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:20.220 [2024-11-20 10:59:09.369422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:20.220 [2024-11-20 10:59:09.369432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:20.220 [2024-11-20 10:59:09.369442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:20.220 [2024-11-20 10:59:09.369453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:20.220 [2024-11-20 10:59:09.369463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:20.220 [2024-11-20 10:59:09.369473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:20.220 [2024-11-20 10:59:09.369482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:20.220 [2024-11-20 10:59:09.369492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:20.220 [2024-11-20 10:59:09.369502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:20.220 [2024-11-20 10:59:09.369512] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:20.220 [2024-11-20 10:59:09.369523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:20.220 [2024-11-20 10:59:09.369534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:20.220 [2024-11-20 10:59:09.369544] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:20.220 [2024-11-20 10:59:09.369554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:20.220 [2024-11-20 10:59:09.369564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:20.220 [2024-11-20 10:59:09.369575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.220 [2024-11-20 10:59:09.369585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:20.220 [2024-11-20 10:59:09.369598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:22:20.220 [2024-11-20 10:59:09.369608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.220 [2024-11-20 10:59:09.407715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.220 [2024-11-20 10:59:09.407751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:20.220 [2024-11-20 10:59:09.407765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.106 ms 00:22:20.220 [2024-11-20 10:59:09.407790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.220 [2024-11-20 10:59:09.407909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.220 [2024-11-20 10:59:09.407925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:20.220 [2024-11-20 10:59:09.407936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:22:20.220 [2024-11-20 10:59:09.407945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.480003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.480039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:20.479 [2024-11-20 10:59:09.480052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.153 ms 00:22:20.479 [2024-11-20 10:59:09.480066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.480155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.480167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:20.479 [2024-11-20 10:59:09.480178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:20.479 [2024-11-20 10:59:09.480187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.480647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.480662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:20.479 [2024-11-20 10:59:09.480673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:22:20.479 [2024-11-20 10:59:09.480688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.480820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.480833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:20.479 [2024-11-20 10:59:09.480844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:22:20.479 [2024-11-20 10:59:09.480854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.499754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.499789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:20.479 [2024-11-20 10:59:09.499802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.908 ms 00:22:20.479 [2024-11-20 10:59:09.499812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.518250] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:20.479 [2024-11-20 10:59:09.518399] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:20.479 [2024-11-20 10:59:09.518434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.518445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:20.479 [2024-11-20 10:59:09.518457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.554 ms 00:22:20.479 [2024-11-20 10:59:09.518467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.546267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.546314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:20.479 [2024-11-20 10:59:09.546327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.686 ms 00:22:20.479 [2024-11-20 10:59:09.546337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.563249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.563286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:20.479 [2024-11-20 10:59:09.563299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.865 ms 00:22:20.479 [2024-11-20 10:59:09.563308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.580287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.580320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:20.479 [2024-11-20 10:59:09.580333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.932 ms 00:22:20.479 [2024-11-20 10:59:09.580342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.581092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.581119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:20.479 [2024-11-20 10:59:09.581131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:22:20.479 [2024-11-20 10:59:09.581141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.661524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 
10:59:09.661579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:20.479 [2024-11-20 10:59:09.661608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.484 ms 00:22:20.479 [2024-11-20 10:59:09.661619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.672727] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:20.479 [2024-11-20 10:59:09.688866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.688912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:20.479 [2024-11-20 10:59:09.688928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.175 ms 00:22:20.479 [2024-11-20 10:59:09.688955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.689084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.689098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:20.479 [2024-11-20 10:59:09.689109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:20.479 [2024-11-20 10:59:09.689119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.689172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.689183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:20.479 [2024-11-20 10:59:09.689194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:20.479 [2024-11-20 10:59:09.689203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.689229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.689243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:20.479 [2024-11-20 10:59:09.689254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:20.479 [2024-11-20 10:59:09.689263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.689299] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:20.479 [2024-11-20 10:59:09.689311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.689321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:20.479 [2024-11-20 10:59:09.689331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:20.479 [2024-11-20 10:59:09.689341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.725051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.725089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:20.479 [2024-11-20 10:59:09.725102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.748 ms 00:22:20.479 [2024-11-20 10:59:09.725128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.479 [2024-11-20 10:59:09.725244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:20.479 [2024-11-20 10:59:09.725258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:20.479 [2024-11-20 
10:59:09.725269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:20.479 [2024-11-20 10:59:09.725279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:20.480 [2024-11-20 10:59:09.726230] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:20.738 [2024-11-20 10:59:09.730319] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 394.977 ms, result 0 00:22:20.738 [2024-11-20 10:59:09.731258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:20.738 [2024-11-20 10:59:09.749608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:21.671  [2024-11-20T10:59:11.859Z] Copying: 28/256 [MB] (28 MBps) [2024-11-20T10:59:13.235Z] Copying: 52/256 [MB] (24 MBps) [2024-11-20T10:59:14.171Z] Copying: 76/256 [MB] (24 MBps) [2024-11-20T10:59:15.106Z] Copying: 101/256 [MB] (24 MBps) [2024-11-20T10:59:16.042Z] Copying: 125/256 [MB] (24 MBps) [2024-11-20T10:59:16.978Z] Copying: 150/256 [MB] (24 MBps) [2024-11-20T10:59:17.913Z] Copying: 174/256 [MB] (24 MBps) [2024-11-20T10:59:18.855Z] Copying: 199/256 [MB] (24 MBps) [2024-11-20T10:59:20.233Z] Copying: 224/256 [MB] (24 MBps) [2024-11-20T10:59:20.233Z] Copying: 249/256 [MB] (25 MBps) [2024-11-20T10:59:20.492Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-20 10:59:20.410585] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:31.239 [2024-11-20 10:59:20.425991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.239 [2024-11-20 10:59:20.426034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:31.239 [2024-11-20 10:59:20.426050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:31.239 [2024-11-20 10:59:20.426068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.239 [2024-11-20 10:59:20.426096] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:31.239 [2024-11-20 10:59:20.430304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.239 [2024-11-20 10:59:20.430334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:31.239 [2024-11-20 10:59:20.430347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.198 ms 00:22:31.239 [2024-11-20 10:59:20.430357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.239 [2024-11-20 10:59:20.430647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.239 [2024-11-20 10:59:20.430665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:31.239 [2024-11-20 10:59:20.430676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:22:31.239 [2024-11-20 10:59:20.430686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.239 [2024-11-20 10:59:20.433707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.239 [2024-11-20 10:59:20.433736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:31.239 [2024-11-20 10:59:20.433747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.008 ms 00:22:31.239 [2024-11-20 10:59:20.433757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:31.239 [2024-11-20 10:59:20.439609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.239 [2024-11-20 10:59:20.439667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:31.239 [2024-11-20 10:59:20.439680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.838 ms 00:22:31.239 [2024-11-20 10:59:20.439690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.239 [2024-11-20 10:59:20.477758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.239 [2024-11-20 10:59:20.477798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:31.239 [2024-11-20 10:59:20.477812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.058 ms 00:22:31.239 [2024-11-20 10:59:20.477822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.499 [2024-11-20 10:59:20.499434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.499 [2024-11-20 10:59:20.499478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:31.499 [2024-11-20 10:59:20.499492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.585 ms 00:22:31.499 [2024-11-20 10:59:20.499505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.499 [2024-11-20 10:59:20.499668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.499 [2024-11-20 10:59:20.499682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:31.499 [2024-11-20 10:59:20.499693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:22:31.499 [2024-11-20 10:59:20.499702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.499 [2024-11-20 10:59:20.534478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.499 [2024-11-20 10:59:20.534519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:31.499 [2024-11-20 10:59:20.534531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.802 ms 00:22:31.499 [2024-11-20 10:59:20.534541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.499 [2024-11-20 10:59:20.568240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.499 [2024-11-20 10:59:20.568275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:31.499 [2024-11-20 10:59:20.568287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.669 ms 00:22:31.499 [2024-11-20 10:59:20.568312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.499 [2024-11-20 10:59:20.602075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.499 [2024-11-20 10:59:20.602109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:31.499 [2024-11-20 10:59:20.602121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.764 ms 00:22:31.499 [2024-11-20 10:59:20.602130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.499 [2024-11-20 10:59:20.635966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.499 [2024-11-20 10:59:20.636107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:31.499 [2024-11-20 10:59:20.636143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.795 ms 00:22:31.499 
[2024-11-20 10:59:20.636153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.499 [2024-11-20 10:59:20.636228] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:31.499 [2024-11-20 10:59:20.636247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:31.499 [2024-11-20 10:59:20.636259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:31.499 [2024-11-20 10:59:20.636270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:31.499 [2024-11-20 10:59:20.636280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:31.499 [2024-11-20 10:59:20.636291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:31.499 [2024-11-20 10:59:20.636301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:31.499 [2024-11-20 10:59:20.636312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:31.499 [2024-11-20 10:59:20.636322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:31.499 [2024-11-20 10:59:20.636332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:31.499 [2024-11-20 10:59:20.636342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:31.499 [2024-11-20 10:59:20.636353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:31.499 [2024-11-20 10:59:20.636364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:31.499 [2024-11-20 10:59:20.636374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:31.499 [2024-11-20 10:59:20.636384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:31.499 [2024-11-20 10:59:20.636394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636485] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 
10:59:20.636768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.636998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:22:31.500 [2024-11-20 10:59:20.637028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:31.500 [2024-11-20 10:59:20.637322] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:31.500 [2024-11-20 10:59:20.637332] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4b8e0acb-4646-4966-b129-dade0fcf8fcb 00:22:31.500 [2024-11-20 10:59:20.637342] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:31.500 [2024-11-20 10:59:20.637352] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:31.501 [2024-11-20 10:59:20.637361] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:31.501 [2024-11-20 10:59:20.637371] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:31.501 [2024-11-20 10:59:20.637381] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:31.501 [2024-11-20 10:59:20.637391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:31.501 [2024-11-20 10:59:20.637401] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:31.501 [2024-11-20 10:59:20.637409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:31.501 [2024-11-20 10:59:20.637418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:31.501 [2024-11-20 10:59:20.637428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.501 [2024-11-20 10:59:20.637442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:31.501 [2024-11-20 10:59:20.637452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.203 ms 00:22:31.501 [2024-11-20 10:59:20.637462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.501 [2024-11-20 10:59:20.656309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.501 [2024-11-20 10:59:20.656341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:31.501 [2024-11-20 10:59:20.656353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.858 ms 00:22:31.501 [2024-11-20 10:59:20.656362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.501 [2024-11-20 10:59:20.656957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.501 [2024-11-20 10:59:20.656979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:31.501 [2024-11-20 10:59:20.656990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:22:31.501 [2024-11-20 10:59:20.657000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.501 [2024-11-20 10:59:20.708955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.501 [2024-11-20 10:59:20.708988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:31.501 [2024-11-20 10:59:20.709000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.501 [2024-11-20 10:59:20.709025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.501 [2024-11-20 10:59:20.709100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.501 [2024-11-20 10:59:20.709111] mngt/ftl_mngt.c: 
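The statistics dump above reports 'WAF: inf' by simple arithmetic: the write amplification factor is total device writes divided by user writes, here 960 / 0, which has no finite value and so prints as inf. All 960 writes in this run are metadata from startup and shutdown; once user I/O lands the ratio becomes finite, e.g. (hypothetical numbers, not from this run) 960 metadata writes on top of 61440 user writes would give WAF = 62400 / 61440, about 1.016.
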
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:31.501 [2024-11-20 10:59:20.709121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.501 [2024-11-20 10:59:20.709131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.501 [2024-11-20 10:59:20.709176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.501 [2024-11-20 10:59:20.709188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:31.501 [2024-11-20 10:59:20.709206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.501 [2024-11-20 10:59:20.709216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.501 [2024-11-20 10:59:20.709234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.501 [2024-11-20 10:59:20.709248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:31.501 [2024-11-20 10:59:20.709258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.501 [2024-11-20 10:59:20.709268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.760 [2024-11-20 10:59:20.824551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.760 [2024-11-20 10:59:20.824610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:31.760 [2024-11-20 10:59:20.824624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.760 [2024-11-20 10:59:20.824634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.760 [2024-11-20 10:59:20.919768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.760 [2024-11-20 10:59:20.919968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:31.760 [2024-11-20 10:59:20.919988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.760 [2024-11-20 10:59:20.920000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.760 [2024-11-20 10:59:20.920060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.760 [2024-11-20 10:59:20.920071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:31.760 [2024-11-20 10:59:20.920081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.760 [2024-11-20 10:59:20.920090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.760 [2024-11-20 10:59:20.920117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.760 [2024-11-20 10:59:20.920128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:31.760 [2024-11-20 10:59:20.920144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.760 [2024-11-20 10:59:20.920153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.760 [2024-11-20 10:59:20.920251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.760 [2024-11-20 10:59:20.920264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:31.760 [2024-11-20 10:59:20.920274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.760 [2024-11-20 10:59:20.920283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.760 [2024-11-20 10:59:20.920318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:22:31.760 [2024-11-20 10:59:20.920330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:31.760 [2024-11-20 10:59:20.920339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.760 [2024-11-20 10:59:20.920353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.760 [2024-11-20 10:59:20.920390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.760 [2024-11-20 10:59:20.920401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:31.760 [2024-11-20 10:59:20.920411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.760 [2024-11-20 10:59:20.920421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.760 [2024-11-20 10:59:20.920461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:31.760 [2024-11-20 10:59:20.920472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:31.760 [2024-11-20 10:59:20.920486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:31.760 [2024-11-20 10:59:20.920495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.760 [2024-11-20 10:59:20.920693] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 495.446 ms, result 0 00:22:32.697 00:22:32.697 00:22:32.697 10:59:21 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:33.265 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:22:33.265 10:59:22 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:22:33.265 10:59:22 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:22:33.265 10:59:22 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:33.265 10:59:22 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:33.265 10:59:22 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:22:33.265 10:59:22 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:33.265 10:59:22 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78461 00:22:33.265 10:59:22 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78461 ']' 00:22:33.265 10:59:22 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78461 00:22:33.265 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78461) - No such process 00:22:33.265 Process with pid 78461 is not found 00:22:33.265 10:59:22 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78461 is not found' 00:22:33.265 ************************************ 00:22:33.265 END TEST ftl_trim 00:22:33.265 ************************************ 00:22:33.265 00:22:33.265 real 1m8.259s 00:22:33.265 user 1m31.743s 00:22:33.265 sys 0m6.492s 00:22:33.265 10:59:22 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:33.265 10:59:22 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:33.525 10:59:22 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:33.525 10:59:22 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:33.525 10:59:22 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.525 10:59:22 ftl -- common/autotest_common.sh@10 
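The 'data: OK' line above is the integrity check that closes ftl_trim: a checksum of the test file is recorded while the device is live, the device is shut down and brought back, and the file is re-read and compared. A condensed sketch of that record/verify pattern (paths shortened; the real files are the testfile.md5, random_pattern and data files removed by the rm -f lines above):

  md5sum data > testfile.md5    # recorded before the shutdown
  # ... restart the FTL device and re-read the data region ...
  md5sum -c testfile.md5        # prints 'data: OK' on a byte-exact match
  rm -f testfile.md5 data       # cleanup, as in trim.sh lines 15-18
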
-- # set +x 00:22:33.525 ************************************ 00:22:33.525 START TEST ftl_restore 00:22:33.525 ************************************ 00:22:33.525 10:59:22 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:33.525 * Looking for test storage... 00:22:33.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:33.525 10:59:22 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:33.525 10:59:22 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:22:33.525 10:59:22 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:33.525 10:59:22 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:33.525 10:59:22 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:22:33.525 10:59:22 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:33.525 10:59:22 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:33.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.525 --rc genhtml_branch_coverage=1 00:22:33.525 --rc genhtml_function_coverage=1 00:22:33.525 --rc genhtml_legend=1 00:22:33.525 --rc geninfo_all_blocks=1 00:22:33.525 --rc geninfo_unexecuted_blocks=1 00:22:33.525 00:22:33.525 ' 00:22:33.525 10:59:22 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:33.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.525 --rc genhtml_branch_coverage=1 00:22:33.525 --rc genhtml_function_coverage=1 00:22:33.525 --rc genhtml_legend=1 00:22:33.525 --rc geninfo_all_blocks=1 00:22:33.525 --rc geninfo_unexecuted_blocks=1 00:22:33.525 00:22:33.525 ' 00:22:33.525 10:59:22 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:33.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.525 --rc genhtml_branch_coverage=1 00:22:33.525 --rc genhtml_function_coverage=1 00:22:33.525 --rc genhtml_legend=1 00:22:33.525 --rc geninfo_all_blocks=1 00:22:33.525 --rc geninfo_unexecuted_blocks=1 00:22:33.525 00:22:33.525 ' 00:22:33.525 10:59:22 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:33.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.525 --rc genhtml_branch_coverage=1 00:22:33.525 --rc genhtml_function_coverage=1 00:22:33.525 --rc genhtml_legend=1 00:22:33.525 --rc geninfo_all_blocks=1 00:22:33.525 --rc geninfo_unexecuted_blocks=1 00:22:33.525 00:22:33.525 ' 00:22:33.525 10:59:22 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:33.525 10:59:22 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:33.525 10:59:22 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
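The xtrace block above is the lcov version probe: cmp_versions splits both version strings on '.', '-' and ':' into arrays and compares them field by field, numerically, to decide whether the installed lcov predates 2.x and therefore needs the legacy --rc lcov_*=1 option spellings. A condensed, hypothetical re-implementation of that flow:

  ver_lt() {                    # is $1 < $2, by cmp_versions rules
      local i
      local IFS=.-:             # same separators the trace shows (IFS=.-:)
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                  # equal is not less-than
  }
  ver_lt 1.15 2 && echo 'lcov < 2: use --rc lcov_branch_coverage=1 style'

Numeric-only fields are assumed here; version strings with alphabetic suffixes need the extra handling the full helper carries.
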
00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.u5njbdVB6I 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:33.787 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=78739 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:33.787 10:59:22 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 78739 00:22:33.787 10:59:22 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 78739 ']' 00:22:33.787 10:59:22 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.787 10:59:22 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.787 10:59:22 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.787 10:59:22 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.787 10:59:22 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:33.787 [2024-11-20 10:59:22.916839] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:22:33.787 [2024-11-20 10:59:22.916973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78739 ] 00:22:34.045 [2024-11-20 10:59:23.095774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.045 [2024-11-20 10:59:23.201678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.981 10:59:24 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.981 10:59:24 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:22:34.981 10:59:24 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:34.981 10:59:24 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:34.981 10:59:24 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:34.981 10:59:24 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:34.981 10:59:24 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:34.981 10:59:24 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:35.240 10:59:24 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:35.240 10:59:24 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:35.240 10:59:24 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:35.240 10:59:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:35.240 10:59:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:35.240 10:59:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:35.240 10:59:24 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:35.240 10:59:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:35.499 10:59:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:35.499 { 00:22:35.499 "name": "nvme0n1", 00:22:35.499 "aliases": [ 00:22:35.499 "fe16848b-fd77-4384-8844-c7b3d6f80ca4" 00:22:35.499 ], 00:22:35.499 "product_name": "NVMe disk", 00:22:35.499 "block_size": 4096, 00:22:35.499 "num_blocks": 1310720, 00:22:35.499 "uuid": 
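waitforlisten above does nothing more exotic than polling the freshly started spdk_tgt (pid 78739) until its RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of the same start-and-wait pattern, using a cheap RPC (spdk_get_version here) as the liveness probe:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  svcpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
      sleep 0.5                 # keep probing until the socket accepts RPCs
  done
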
"fe16848b-fd77-4384-8844-c7b3d6f80ca4", 00:22:35.499 "numa_id": -1, 00:22:35.499 "assigned_rate_limits": { 00:22:35.499 "rw_ios_per_sec": 0, 00:22:35.499 "rw_mbytes_per_sec": 0, 00:22:35.499 "r_mbytes_per_sec": 0, 00:22:35.499 "w_mbytes_per_sec": 0 00:22:35.499 }, 00:22:35.499 "claimed": true, 00:22:35.499 "claim_type": "read_many_write_one", 00:22:35.499 "zoned": false, 00:22:35.499 "supported_io_types": { 00:22:35.499 "read": true, 00:22:35.499 "write": true, 00:22:35.499 "unmap": true, 00:22:35.499 "flush": true, 00:22:35.499 "reset": true, 00:22:35.499 "nvme_admin": true, 00:22:35.499 "nvme_io": true, 00:22:35.499 "nvme_io_md": false, 00:22:35.499 "write_zeroes": true, 00:22:35.499 "zcopy": false, 00:22:35.499 "get_zone_info": false, 00:22:35.499 "zone_management": false, 00:22:35.499 "zone_append": false, 00:22:35.499 "compare": true, 00:22:35.499 "compare_and_write": false, 00:22:35.499 "abort": true, 00:22:35.499 "seek_hole": false, 00:22:35.499 "seek_data": false, 00:22:35.499 "copy": true, 00:22:35.499 "nvme_iov_md": false 00:22:35.499 }, 00:22:35.499 "driver_specific": { 00:22:35.499 "nvme": [ 00:22:35.499 { 00:22:35.499 "pci_address": "0000:00:11.0", 00:22:35.499 "trid": { 00:22:35.499 "trtype": "PCIe", 00:22:35.499 "traddr": "0000:00:11.0" 00:22:35.499 }, 00:22:35.499 "ctrlr_data": { 00:22:35.499 "cntlid": 0, 00:22:35.499 "vendor_id": "0x1b36", 00:22:35.499 "model_number": "QEMU NVMe Ctrl", 00:22:35.499 "serial_number": "12341", 00:22:35.499 "firmware_revision": "8.0.0", 00:22:35.499 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:35.499 "oacs": { 00:22:35.499 "security": 0, 00:22:35.499 "format": 1, 00:22:35.499 "firmware": 0, 00:22:35.499 "ns_manage": 1 00:22:35.499 }, 00:22:35.499 "multi_ctrlr": false, 00:22:35.499 "ana_reporting": false 00:22:35.499 }, 00:22:35.499 "vs": { 00:22:35.499 "nvme_version": "1.4" 00:22:35.499 }, 00:22:35.499 "ns_data": { 00:22:35.499 "id": 1, 00:22:35.499 "can_share": false 00:22:35.499 } 00:22:35.499 } 00:22:35.499 ], 00:22:35.499 "mp_policy": "active_passive" 00:22:35.499 } 00:22:35.499 } 00:22:35.499 ]' 00:22:35.499 10:59:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:35.499 10:59:24 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:35.499 10:59:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:35.499 10:59:24 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:35.499 10:59:24 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:35.499 10:59:24 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:22:35.499 10:59:24 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:35.499 10:59:24 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:35.500 10:59:24 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:35.500 10:59:24 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:35.500 10:59:24 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:35.759 10:59:24 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=478a5d05-92b1-476d-ba91-47bab5336237 00:22:35.759 10:59:24 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:35.759 10:59:24 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 478a5d05-92b1-476d-ba91-47bab5336237 00:22:35.759 10:59:24 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:22:36.018 10:59:25 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=61bd11e1-2efb-45b1-a6e8-5b77514778f3 00:22:36.018 10:59:25 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 61bd11e1-2efb-45b1-a6e8-5b77514778f3 00:22:36.278 10:59:25 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=8ad2e6b9-ab59-46b3-b687-fbb00c452055 00:22:36.278 10:59:25 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:36.278 10:59:25 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8ad2e6b9-ab59-46b3-b687-fbb00c452055 00:22:36.278 10:59:25 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:36.278 10:59:25 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:36.278 10:59:25 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=8ad2e6b9-ab59-46b3-b687-fbb00c452055 00:22:36.278 10:59:25 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:36.278 10:59:25 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 8ad2e6b9-ab59-46b3-b687-fbb00c452055 00:22:36.278 10:59:25 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8ad2e6b9-ab59-46b3-b687-fbb00c452055 00:22:36.278 10:59:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:36.278 10:59:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:36.278 10:59:25 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:36.278 10:59:25 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8ad2e6b9-ab59-46b3-b687-fbb00c452055 00:22:36.537 10:59:25 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:36.537 { 00:22:36.537 "name": "8ad2e6b9-ab59-46b3-b687-fbb00c452055", 00:22:36.537 "aliases": [ 00:22:36.537 "lvs/nvme0n1p0" 00:22:36.537 ], 00:22:36.537 "product_name": "Logical Volume", 00:22:36.537 "block_size": 4096, 00:22:36.537 "num_blocks": 26476544, 00:22:36.537 "uuid": "8ad2e6b9-ab59-46b3-b687-fbb00c452055", 00:22:36.537 "assigned_rate_limits": { 00:22:36.537 "rw_ios_per_sec": 0, 00:22:36.537 "rw_mbytes_per_sec": 0, 00:22:36.537 "r_mbytes_per_sec": 0, 00:22:36.537 "w_mbytes_per_sec": 0 00:22:36.537 }, 00:22:36.537 "claimed": false, 00:22:36.537 "zoned": false, 00:22:36.537 "supported_io_types": { 00:22:36.537 "read": true, 00:22:36.537 "write": true, 00:22:36.537 "unmap": true, 00:22:36.537 "flush": false, 00:22:36.537 "reset": true, 00:22:36.537 "nvme_admin": false, 00:22:36.537 "nvme_io": false, 00:22:36.537 "nvme_io_md": false, 00:22:36.537 "write_zeroes": true, 00:22:36.537 "zcopy": false, 00:22:36.537 "get_zone_info": false, 00:22:36.537 "zone_management": false, 00:22:36.537 "zone_append": false, 00:22:36.537 "compare": false, 00:22:36.537 "compare_and_write": false, 00:22:36.537 "abort": false, 00:22:36.537 "seek_hole": true, 00:22:36.537 "seek_data": true, 00:22:36.537 "copy": false, 00:22:36.537 "nvme_iov_md": false 00:22:36.537 }, 00:22:36.537 "driver_specific": { 00:22:36.537 "lvol": { 00:22:36.537 "lvol_store_uuid": "61bd11e1-2efb-45b1-a6e8-5b77514778f3", 00:22:36.537 "base_bdev": "nvme0n1", 00:22:36.537 "thin_provision": true, 00:22:36.537 "num_allocated_clusters": 0, 00:22:36.537 "snapshot": false, 00:22:36.538 "clone": false, 00:22:36.538 "esnap_clone": false 00:22:36.538 } 00:22:36.538 } 00:22:36.538 } 00:22:36.538 ]' 00:22:36.538 10:59:25 ftl.ftl_restore -- 
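clear_lvols plus the two bdev_lvol_* calls above set up the base volume: any stale lvstore is deleted by UUID, a fresh store named lvs is created on nvme0n1, and a 103424 MiB thin-provisioned (-t) volume nvme0n1p0 is carved out of it. Condensed, with rpc.py standing in for the full scripts/rpc.py path:

  rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid' | while read -r u; do
      rpc.py bdev_lvol_delete_lvstore -u "$u"   # drop leftovers from earlier tests
  done
  lvs=$(rpc.py bdev_lvol_create_lvstore nvme0n1 lvs)
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"
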
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:36.538 10:59:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:36.538 10:59:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:36.538 10:59:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:36.538 10:59:25 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:36.538 10:59:25 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:36.538 10:59:25 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:36.538 10:59:25 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:36.538 10:59:25 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:36.797 10:59:25 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:36.797 10:59:25 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:36.797 10:59:25 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 8ad2e6b9-ab59-46b3-b687-fbb00c452055 00:22:36.797 10:59:25 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8ad2e6b9-ab59-46b3-b687-fbb00c452055 00:22:36.797 10:59:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:36.797 10:59:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:36.797 10:59:25 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:36.797 10:59:25 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8ad2e6b9-ab59-46b3-b687-fbb00c452055 00:22:37.055 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:37.055 { 00:22:37.055 "name": "8ad2e6b9-ab59-46b3-b687-fbb00c452055", 00:22:37.055 "aliases": [ 00:22:37.055 "lvs/nvme0n1p0" 00:22:37.055 ], 00:22:37.055 "product_name": "Logical Volume", 00:22:37.056 "block_size": 4096, 00:22:37.056 "num_blocks": 26476544, 00:22:37.056 "uuid": "8ad2e6b9-ab59-46b3-b687-fbb00c452055", 00:22:37.056 "assigned_rate_limits": { 00:22:37.056 "rw_ios_per_sec": 0, 00:22:37.056 "rw_mbytes_per_sec": 0, 00:22:37.056 "r_mbytes_per_sec": 0, 00:22:37.056 "w_mbytes_per_sec": 0 00:22:37.056 }, 00:22:37.056 "claimed": false, 00:22:37.056 "zoned": false, 00:22:37.056 "supported_io_types": { 00:22:37.056 "read": true, 00:22:37.056 "write": true, 00:22:37.056 "unmap": true, 00:22:37.056 "flush": false, 00:22:37.056 "reset": true, 00:22:37.056 "nvme_admin": false, 00:22:37.056 "nvme_io": false, 00:22:37.056 "nvme_io_md": false, 00:22:37.056 "write_zeroes": true, 00:22:37.056 "zcopy": false, 00:22:37.056 "get_zone_info": false, 00:22:37.056 "zone_management": false, 00:22:37.056 "zone_append": false, 00:22:37.056 "compare": false, 00:22:37.056 "compare_and_write": false, 00:22:37.056 "abort": false, 00:22:37.056 "seek_hole": true, 00:22:37.056 "seek_data": true, 00:22:37.056 "copy": false, 00:22:37.056 "nvme_iov_md": false 00:22:37.056 }, 00:22:37.056 "driver_specific": { 00:22:37.056 "lvol": { 00:22:37.056 "lvol_store_uuid": "61bd11e1-2efb-45b1-a6e8-5b77514778f3", 00:22:37.056 "base_bdev": "nvme0n1", 00:22:37.056 "thin_provision": true, 00:22:37.056 "num_allocated_clusters": 0, 00:22:37.056 "snapshot": false, 00:22:37.056 "clone": false, 00:22:37.056 "esnap_clone": false 00:22:37.056 } 00:22:37.056 } 00:22:37.056 } 00:22:37.056 ]' 00:22:37.056 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:22:37.056 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:37.056 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:37.056 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:37.056 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:37.056 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:37.056 10:59:26 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:37.056 10:59:26 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:37.335 10:59:26 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:37.335 10:59:26 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 8ad2e6b9-ab59-46b3-b687-fbb00c452055 00:22:37.335 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8ad2e6b9-ab59-46b3-b687-fbb00c452055 00:22:37.335 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:37.335 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:37.335 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:37.335 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8ad2e6b9-ab59-46b3-b687-fbb00c452055 00:22:37.646 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:37.646 { 00:22:37.646 "name": "8ad2e6b9-ab59-46b3-b687-fbb00c452055", 00:22:37.646 "aliases": [ 00:22:37.646 "lvs/nvme0n1p0" 00:22:37.646 ], 00:22:37.646 "product_name": "Logical Volume", 00:22:37.646 "block_size": 4096, 00:22:37.646 "num_blocks": 26476544, 00:22:37.646 "uuid": "8ad2e6b9-ab59-46b3-b687-fbb00c452055", 00:22:37.646 "assigned_rate_limits": { 00:22:37.646 "rw_ios_per_sec": 0, 00:22:37.646 "rw_mbytes_per_sec": 0, 00:22:37.646 "r_mbytes_per_sec": 0, 00:22:37.646 "w_mbytes_per_sec": 0 00:22:37.646 }, 00:22:37.646 "claimed": false, 00:22:37.646 "zoned": false, 00:22:37.646 "supported_io_types": { 00:22:37.646 "read": true, 00:22:37.646 "write": true, 00:22:37.646 "unmap": true, 00:22:37.646 "flush": false, 00:22:37.646 "reset": true, 00:22:37.646 "nvme_admin": false, 00:22:37.646 "nvme_io": false, 00:22:37.646 "nvme_io_md": false, 00:22:37.646 "write_zeroes": true, 00:22:37.646 "zcopy": false, 00:22:37.646 "get_zone_info": false, 00:22:37.646 "zone_management": false, 00:22:37.646 "zone_append": false, 00:22:37.646 "compare": false, 00:22:37.646 "compare_and_write": false, 00:22:37.646 "abort": false, 00:22:37.646 "seek_hole": true, 00:22:37.646 "seek_data": true, 00:22:37.646 "copy": false, 00:22:37.646 "nvme_iov_md": false 00:22:37.646 }, 00:22:37.646 "driver_specific": { 00:22:37.646 "lvol": { 00:22:37.646 "lvol_store_uuid": "61bd11e1-2efb-45b1-a6e8-5b77514778f3", 00:22:37.646 "base_bdev": "nvme0n1", 00:22:37.646 "thin_provision": true, 00:22:37.646 "num_allocated_clusters": 0, 00:22:37.646 "snapshot": false, 00:22:37.646 "clone": false, 00:22:37.646 "esnap_clone": false 00:22:37.646 } 00:22:37.646 } 00:22:37.646 } 00:22:37.646 ]' 00:22:37.646 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:37.646 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:37.646 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:37.646 10:59:26 ftl.ftl_restore -- 
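The cache side mirrors the base setup: the controller at 0000:00:10.0 is attached as nvc0 and bdev_split_create carves a single 5171 MiB split, nvc0n1p0, which will serve as the FTL non-volatile write-buffer cache. The two calls, as executed above:

  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  rpc.py bdev_split_create nvc0n1 -s 5171 1    # one 5171 MiB split -> nvc0n1p0
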
common/autotest_common.sh@1388 -- # nb=26476544 00:22:37.646 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:37.646 10:59:26 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:37.646 10:59:26 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:37.646 10:59:26 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8ad2e6b9-ab59-46b3-b687-fbb00c452055 --l2p_dram_limit 10' 00:22:37.646 10:59:26 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:37.646 10:59:26 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:37.646 10:59:26 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:37.646 10:59:26 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:37.646 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:37.646 10:59:26 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8ad2e6b9-ab59-46b3-b687-fbb00c452055 --l2p_dram_limit 10 -c nvc0n1p0 00:22:37.948 [2024-11-20 10:59:26.942791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.948 [2024-11-20 10:59:26.942840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:37.948 [2024-11-20 10:59:26.942876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:37.948 [2024-11-20 10:59:26.942887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.948 [2024-11-20 10:59:26.942949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.948 [2024-11-20 10:59:26.942961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:37.948 [2024-11-20 10:59:26.942974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:37.948 [2024-11-20 10:59:26.942984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.948 [2024-11-20 10:59:26.943012] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:37.948 [2024-11-20 10:59:26.944035] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:37.948 [2024-11-20 10:59:26.944063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.948 [2024-11-20 10:59:26.944074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:37.948 [2024-11-20 10:59:26.944087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:22:37.948 [2024-11-20 10:59:26.944097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.948 [2024-11-20 10:59:26.944140] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7b1bdabc-6e72-4c1d-a5ab-4dbd8ece5857 00:22:37.948 [2024-11-20 10:59:26.945625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.948 [2024-11-20 10:59:26.945766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:37.948 [2024-11-20 10:59:26.945786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:37.948 [2024-11-20 10:59:26.945800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.948 [2024-11-20 10:59:26.953345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.948 [2024-11-20 
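Two details in the block above are worth calling out. The 'integer expression expected' message is a benign script quirk: restore.sh line 54 evaluates '[' '' -eq 1 ']' with an empty left operand (the optional flag was never set for this run), and test(1) complains about the non-integer operand without aborting the test. The create call that follows is the assembled result of ftl_construct_args:

  # exactly as executed above: ftl0 on the thin lvol, a 10 MiB L2P DRAM cap,
  # and nvc0n1p0 as the NV cache
  rpc.py -t 240 bdev_ftl_create -b ftl0 \
      -d 8ad2e6b9-ab59-46b3-b687-fbb00c452055 \
      --l2p_dram_limit 10 -c nvc0n1p0
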
10:59:26.953515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:37.948 [2024-11-20 10:59:26.953538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.423 ms 00:22:37.948 [2024-11-20 10:59:26.953552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.948 [2024-11-20 10:59:26.953670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.948 [2024-11-20 10:59:26.953688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:37.948 [2024-11-20 10:59:26.953700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:22:37.948 [2024-11-20 10:59:26.953718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.948 [2024-11-20 10:59:26.953773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.948 [2024-11-20 10:59:26.953788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:37.948 [2024-11-20 10:59:26.953799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:37.948 [2024-11-20 10:59:26.953815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.948 [2024-11-20 10:59:26.953840] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:37.948 [2024-11-20 10:59:26.959294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.948 [2024-11-20 10:59:26.959330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:37.948 [2024-11-20 10:59:26.959347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.466 ms 00:22:37.948 [2024-11-20 10:59:26.959358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.948 [2024-11-20 10:59:26.959394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.948 [2024-11-20 10:59:26.959405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:37.948 [2024-11-20 10:59:26.959418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:37.948 [2024-11-20 10:59:26.959428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.948 [2024-11-20 10:59:26.959491] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:37.948 [2024-11-20 10:59:26.959638] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:37.948 [2024-11-20 10:59:26.959660] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:37.949 [2024-11-20 10:59:26.959674] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:37.949 [2024-11-20 10:59:26.959690] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:37.949 [2024-11-20 10:59:26.959713] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:37.949 [2024-11-20 10:59:26.959727] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:37.949 [2024-11-20 10:59:26.959738] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:37.949 [2024-11-20 10:59:26.959753] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:37.949 [2024-11-20 10:59:26.959764] 
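The layout numbers above cross-check cleanly: 20971520 L2P entries at one entry per 4 KiB logical block means 80 GiB of addressable user space on the 103424 MiB (101 GiB) base volume, and with an L2P address size of 4 bytes the whole table costs 80 MiB, which is exactly the l2p region size in the layout dump that follows. The --l2p_dram_limit 10 passed to bdev_ftl_create then keeps only up to 10 MiB of that table DRAM-resident at a time.

  20971520 entries x 4096 B/block = 80 GiB addressable space
  20971520 entries x 4 B/address  = 80 MiB L2P table (the 80.00 MiB l2p region)
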
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:37.949 [2024-11-20 10:59:26.959777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.949 [2024-11-20 10:59:26.959787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:37.949 [2024-11-20 10:59:26.959800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:22:37.949 [2024-11-20 10:59:26.959819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.949 [2024-11-20 10:59:26.959900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.949 [2024-11-20 10:59:26.959911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:37.949 [2024-11-20 10:59:26.959923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:37.949 [2024-11-20 10:59:26.959933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.949 [2024-11-20 10:59:26.960066] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:37.949 [2024-11-20 10:59:26.960080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:37.949 [2024-11-20 10:59:26.960094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:37.949 [2024-11-20 10:59:26.960105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.949 [2024-11-20 10:59:26.960119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:37.949 [2024-11-20 10:59:26.960128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:37.949 [2024-11-20 10:59:26.960141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:37.949 [2024-11-20 10:59:26.960150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:37.949 [2024-11-20 10:59:26.960163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:37.949 [2024-11-20 10:59:26.960173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:37.949 [2024-11-20 10:59:26.960185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:37.949 [2024-11-20 10:59:26.960196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:37.949 [2024-11-20 10:59:26.960209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:37.949 [2024-11-20 10:59:26.960219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:37.949 [2024-11-20 10:59:26.960231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:37.949 [2024-11-20 10:59:26.960240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.949 [2024-11-20 10:59:26.960255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:37.949 [2024-11-20 10:59:26.960264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:37.949 [2024-11-20 10:59:26.960278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.949 [2024-11-20 10:59:26.960287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:37.949 [2024-11-20 10:59:26.960299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:37.949 [2024-11-20 10:59:26.960309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.949 [2024-11-20 10:59:26.960321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:37.949 
[2024-11-20 10:59:26.960331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:37.949 [2024-11-20 10:59:26.960342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.949 [2024-11-20 10:59:26.960352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:37.949 [2024-11-20 10:59:26.960364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:37.949 [2024-11-20 10:59:26.960373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.949 [2024-11-20 10:59:26.960385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:37.949 [2024-11-20 10:59:26.960395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:37.949 [2024-11-20 10:59:26.960406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:37.949 [2024-11-20 10:59:26.960416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:37.949 [2024-11-20 10:59:26.960431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:37.949 [2024-11-20 10:59:26.960440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:37.949 [2024-11-20 10:59:26.960452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:37.949 [2024-11-20 10:59:26.960462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:37.949 [2024-11-20 10:59:26.960473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:37.949 [2024-11-20 10:59:26.960483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:37.949 [2024-11-20 10:59:26.960494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:37.949 [2024-11-20 10:59:26.960504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.949 [2024-11-20 10:59:26.960516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:37.949 [2024-11-20 10:59:26.960525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:37.949 [2024-11-20 10:59:26.960537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.949 [2024-11-20 10:59:26.960548] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:37.949 [2024-11-20 10:59:26.960561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:37.949 [2024-11-20 10:59:26.960572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:37.949 [2024-11-20 10:59:26.960586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:37.949 [2024-11-20 10:59:26.960597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:37.949 [2024-11-20 10:59:26.960621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:37.949 [2024-11-20 10:59:26.960631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:37.949 [2024-11-20 10:59:26.960644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:37.949 [2024-11-20 10:59:26.960654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:37.949 [2024-11-20 10:59:26.960666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:37.949 [2024-11-20 10:59:26.960694] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:37.949 [2024-11-20 
10:59:26.960713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:37.949 [2024-11-20 10:59:26.960728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:37.949 [2024-11-20 10:59:26.960741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:37.949 [2024-11-20 10:59:26.960752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:37.949 [2024-11-20 10:59:26.960766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:37.949 [2024-11-20 10:59:26.960776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:37.949 [2024-11-20 10:59:26.960789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:37.949 [2024-11-20 10:59:26.960800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:37.949 [2024-11-20 10:59:26.960813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:37.949 [2024-11-20 10:59:26.960823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:37.949 [2024-11-20 10:59:26.960840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:37.949 [2024-11-20 10:59:26.960850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:37.949 [2024-11-20 10:59:26.960863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:37.949 [2024-11-20 10:59:26.960874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:37.949 [2024-11-20 10:59:26.960888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:37.949 [2024-11-20 10:59:26.960899] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:37.949 [2024-11-20 10:59:26.960913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:37.949 [2024-11-20 10:59:26.960924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:37.949 [2024-11-20 10:59:26.960937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:37.949 [2024-11-20 10:59:26.960948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:37.949 [2024-11-20 10:59:26.960961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:37.949 [2024-11-20 10:59:26.960975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.949 [2024-11-20 10:59:26.960989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:37.949 [2024-11-20 10:59:26.960999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:22:37.949 [2024-11-20 10:59:26.961012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.949 [2024-11-20 10:59:26.961057] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:37.949 [2024-11-20 10:59:26.961075] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:42.237 [2024-11-20 10:59:30.610776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.237 [2024-11-20 10:59:30.610841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:42.237 [2024-11-20 10:59:30.610874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3655.643 ms 00:22:42.237 [2024-11-20 10:59:30.610887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.237 [2024-11-20 10:59:30.646906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.237 [2024-11-20 10:59:30.646958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:42.237 [2024-11-20 10:59:30.646974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.800 ms 00:22:42.237 [2024-11-20 10:59:30.646987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.237 [2024-11-20 10:59:30.647108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.237 [2024-11-20 10:59:30.647124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:42.237 [2024-11-20 10:59:30.647135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:42.237 [2024-11-20 10:59:30.647152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.237 [2024-11-20 10:59:30.689758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.237 [2024-11-20 10:59:30.689973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:42.237 [2024-11-20 10:59:30.689995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.631 ms 00:22:42.237 [2024-11-20 10:59:30.690010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.237 [2024-11-20 10:59:30.690046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.237 [2024-11-20 10:59:30.690062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:42.237 [2024-11-20 10:59:30.690072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:42.237 [2024-11-20 10:59:30.690084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.237 [2024-11-20 10:59:30.690575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.237 [2024-11-20 10:59:30.690611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:42.237 [2024-11-20 10:59:30.690622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:22:42.237 [2024-11-20 10:59:30.690651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.237 
[2024-11-20 10:59:30.690747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.237 [2024-11-20 10:59:30.690761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:42.237 [2024-11-20 10:59:30.690775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:42.237 [2024-11-20 10:59:30.690789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.237 [2024-11-20 10:59:30.710976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.237 [2024-11-20 10:59:30.711016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:42.237 [2024-11-20 10:59:30.711029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.193 ms 00:22:42.237 [2024-11-20 10:59:30.711058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.237 [2024-11-20 10:59:30.723125] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:42.237 [2024-11-20 10:59:30.726304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.237 [2024-11-20 10:59:30.726331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:42.237 [2024-11-20 10:59:30.726346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.188 ms 00:22:42.237 [2024-11-20 10:59:30.726372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.238 [2024-11-20 10:59:30.829018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.238 [2024-11-20 10:59:30.829071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:42.238 [2024-11-20 10:59:30.829090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.774 ms 00:22:42.238 [2024-11-20 10:59:30.829101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.238 [2024-11-20 10:59:30.829280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.238 [2024-11-20 10:59:30.829296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:42.238 [2024-11-20 10:59:30.829313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:22:42.238 [2024-11-20 10:59:30.829323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.238 [2024-11-20 10:59:30.864643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.238 [2024-11-20 10:59:30.864681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:42.238 [2024-11-20 10:59:30.864697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.322 ms 00:22:42.238 [2024-11-20 10:59:30.864723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.238 [2024-11-20 10:59:30.899085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.238 [2024-11-20 10:59:30.899223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:42.238 [2024-11-20 10:59:30.899266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.370 ms 00:22:42.238 [2024-11-20 10:59:30.899276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.238 [2024-11-20 10:59:30.900023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.238 [2024-11-20 10:59:30.900040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:42.238 
[2024-11-20 10:59:30.900055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:22:42.238 [2024-11-20 10:59:30.900065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.238 [2024-11-20 10:59:31.000186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.238 [2024-11-20 10:59:31.000243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:42.238 [2024-11-20 10:59:31.000264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.203 ms 00:22:42.238 [2024-11-20 10:59:31.000275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.238 [2024-11-20 10:59:31.035982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.238 [2024-11-20 10:59:31.036021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:42.238 [2024-11-20 10:59:31.036036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.652 ms 00:22:42.238 [2024-11-20 10:59:31.036046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.238 [2024-11-20 10:59:31.070843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.238 [2024-11-20 10:59:31.070994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:42.238 [2024-11-20 10:59:31.071019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.811 ms 00:22:42.238 [2024-11-20 10:59:31.071029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.238 [2024-11-20 10:59:31.106326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.238 [2024-11-20 10:59:31.106362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:42.238 [2024-11-20 10:59:31.106377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.313 ms 00:22:42.238 [2024-11-20 10:59:31.106387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.238 [2024-11-20 10:59:31.106430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.238 [2024-11-20 10:59:31.106441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:42.238 [2024-11-20 10:59:31.106456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:42.238 [2024-11-20 10:59:31.106466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.238 [2024-11-20 10:59:31.106576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.238 [2024-11-20 10:59:31.106588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:42.238 [2024-11-20 10:59:31.106623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:42.238 [2024-11-20 10:59:31.106649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.238 [2024-11-20 10:59:31.107665] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4171.214 ms, result 0 00:22:42.238 { 00:22:42.238 "name": "ftl0", 00:22:42.238 "uuid": "7b1bdabc-6e72-4c1d-a5ab-4dbd8ece5857" 00:22:42.238 } 00:22:42.238 10:59:31 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:42.238 10:59:31 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:42.238 10:59:31 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:22:42.238 10:59:31 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:42.498 [2024-11-20 10:59:31.526263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.498 [2024-11-20 10:59:31.526314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:42.498 [2024-11-20 10:59:31.526328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:42.498 [2024-11-20 10:59:31.526349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.498 [2024-11-20 10:59:31.526375] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:42.498 [2024-11-20 10:59:31.530391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.498 [2024-11-20 10:59:31.530419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:42.498 [2024-11-20 10:59:31.530433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.002 ms 00:22:42.498 [2024-11-20 10:59:31.530442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.498 [2024-11-20 10:59:31.530729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.498 [2024-11-20 10:59:31.530746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:42.498 [2024-11-20 10:59:31.530762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:22:42.498 [2024-11-20 10:59:31.530772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.498 [2024-11-20 10:59:31.533293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.498 [2024-11-20 10:59:31.533414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:42.498 [2024-11-20 10:59:31.533454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.491 ms 00:22:42.498 [2024-11-20 10:59:31.533465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.498 [2024-11-20 10:59:31.538337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.498 [2024-11-20 10:59:31.538366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:42.498 [2024-11-20 10:59:31.538383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.848 ms 00:22:42.498 [2024-11-20 10:59:31.538393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.498 [2024-11-20 10:59:31.573403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.498 [2024-11-20 10:59:31.573452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:42.498 [2024-11-20 10:59:31.573469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.997 ms 00:22:42.498 [2024-11-20 10:59:31.573495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.498 [2024-11-20 10:59:31.594812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.498 [2024-11-20 10:59:31.594849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:42.498 [2024-11-20 10:59:31.594865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.304 ms 00:22:42.498 [2024-11-20 10:59:31.594891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.498 [2024-11-20 10:59:31.595033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.498 [2024-11-20 10:59:31.595047] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:42.498 [2024-11-20 10:59:31.595060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:22:42.498 [2024-11-20 10:59:31.595070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.498 [2024-11-20 10:59:31.629512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.498 [2024-11-20 10:59:31.629673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:42.498 [2024-11-20 10:59:31.629755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.475 ms 00:22:42.498 [2024-11-20 10:59:31.629791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.498 [2024-11-20 10:59:31.663418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.498 [2024-11-20 10:59:31.663541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:42.498 [2024-11-20 10:59:31.663651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.588 ms 00:22:42.499 [2024-11-20 10:59:31.663687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.499 [2024-11-20 10:59:31.697900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.499 [2024-11-20 10:59:31.698023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:42.499 [2024-11-20 10:59:31.698120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.160 ms 00:22:42.499 [2024-11-20 10:59:31.698154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.499 [2024-11-20 10:59:31.732423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.499 [2024-11-20 10:59:31.732561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:42.499 [2024-11-20 10:59:31.732719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.150 ms 00:22:42.499 [2024-11-20 10:59:31.732735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.499 [2024-11-20 10:59:31.732775] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:42.499 [2024-11-20 10:59:31.732801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732908] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.732999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 
[2024-11-20 10:59:31.733197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:22:42.499 [2024-11-20 10:59:31.733497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:42.499 [2024-11-20 10:59:31.733742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:42.500 [2024-11-20 10:59:31.733992] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:42.500 [2024-11-20 10:59:31.734007] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b1bdabc-6e72-4c1d-a5ab-4dbd8ece5857 00:22:42.500 [2024-11-20 10:59:31.734018] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:42.500 [2024-11-20 10:59:31.734032] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:42.500 [2024-11-20 10:59:31.734041] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:42.500 [2024-11-20 10:59:31.734057] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:42.500 [2024-11-20 10:59:31.734067] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:42.500 [2024-11-20 10:59:31.734079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:42.500 [2024-11-20 10:59:31.734088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:42.500 [2024-11-20 10:59:31.734099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:42.500 [2024-11-20 10:59:31.734108] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:22:42.500 [2024-11-20 10:59:31.734119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.500 [2024-11-20 10:59:31.734129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:42.500 [2024-11-20 10:59:31.734142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.349 ms 00:22:42.500 [2024-11-20 10:59:31.734151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.759 [2024-11-20 10:59:31.753499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.759 [2024-11-20 10:59:31.753531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:42.759 [2024-11-20 10:59:31.753546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.323 ms 00:22:42.759 [2024-11-20 10:59:31.753555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.759 [2024-11-20 10:59:31.754147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.759 [2024-11-20 10:59:31.754165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:42.759 [2024-11-20 10:59:31.754179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:22:42.759 [2024-11-20 10:59:31.754191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.759 [2024-11-20 10:59:31.816110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.759 [2024-11-20 10:59:31.816143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:42.759 [2024-11-20 10:59:31.816157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.759 [2024-11-20 10:59:31.816167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.759 [2024-11-20 10:59:31.816219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.759 [2024-11-20 10:59:31.816229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:42.759 [2024-11-20 10:59:31.816240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.759 [2024-11-20 10:59:31.816253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.759 [2024-11-20 10:59:31.816329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.759 [2024-11-20 10:59:31.816342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:42.759 [2024-11-20 10:59:31.816354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.759 [2024-11-20 10:59:31.816363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.759 [2024-11-20 10:59:31.816386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.759 [2024-11-20 10:59:31.816396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:42.759 [2024-11-20 10:59:31.816408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.759 [2024-11-20 10:59:31.816417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.759 [2024-11-20 10:59:31.936005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.759 [2024-11-20 10:59:31.936182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:42.759 [2024-11-20 10:59:31.936209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:22:42.759 [2024-11-20 10:59:31.936220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.019 [2024-11-20 10:59:32.033109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.019 [2024-11-20 10:59:32.033156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:43.019 [2024-11-20 10:59:32.033172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.019 [2024-11-20 10:59:32.033201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.019 [2024-11-20 10:59:32.033308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.019 [2024-11-20 10:59:32.033320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:43.019 [2024-11-20 10:59:32.033334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.019 [2024-11-20 10:59:32.033344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.019 [2024-11-20 10:59:32.033396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.019 [2024-11-20 10:59:32.033408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:43.019 [2024-11-20 10:59:32.033421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.019 [2024-11-20 10:59:32.033430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.019 [2024-11-20 10:59:32.033546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.019 [2024-11-20 10:59:32.033559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:43.019 [2024-11-20 10:59:32.033571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.019 [2024-11-20 10:59:32.033581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.019 [2024-11-20 10:59:32.033645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.019 [2024-11-20 10:59:32.033658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:43.019 [2024-11-20 10:59:32.033671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.019 [2024-11-20 10:59:32.033697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.019 [2024-11-20 10:59:32.033739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.019 [2024-11-20 10:59:32.033753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:43.019 [2024-11-20 10:59:32.033766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.019 [2024-11-20 10:59:32.033776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.019 [2024-11-20 10:59:32.033824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:43.019 [2024-11-20 10:59:32.033843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:43.019 [2024-11-20 10:59:32.033856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:43.019 [2024-11-20 10:59:32.033866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.019 [2024-11-20 10:59:32.033997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 508.523 ms, result 0 00:22:43.019 true 00:22:43.019 10:59:32 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 78739 
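[Editor's note] The trace lines that follow expand the killprocess helper from common/autotest_common.sh. As a reading aid, here is a minimal Bash sketch of the pattern those lines show — verify the pid is non-empty and alive via kill -0, check the process name on Linux, refuse to kill sudo, then kill and wait. This is a hedged reconstruction from the trace, not the verbatim SPDK source; any name not visible in the trace (e.g. the local variable handling) is an assumption.

    # Sketch of the teardown pattern visible in the following trace
    # (autotest_common.sh@954..@978); not the exact SPDK helper.
    killprocess() {
        local pid=$1 process_name=""
        [ -z "$pid" ] && return 1                # @954: '[' -z 78739 ']'
        kill -0 "$pid" || return 0               # @958: already gone, nothing to do
        if [ "$(uname)" = Linux ]; then          # @959: platform check
            process_name=$(ps --no-headers -o comm= "$pid")   # @960
        fi
        [ "$process_name" = sudo ] && return 1   # @964: never kill a sudo wrapper
        echo "killing process with pid $pid"     # @972
        kill "$pid"                              # @973
        wait "$pid"                              # @978: reap child, propagate status
    }

In the log below, pid 78739 is the SPDK reactor (comm reactor_0), so the guard passes and the process is killed and waited on before the test writes its 1 GiB test file with dd.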
00:22:43.019 10:59:32 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78739 ']' 00:22:43.019 10:59:32 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78739 00:22:43.019 10:59:32 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:22:43.019 10:59:32 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.019 10:59:32 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78739 00:22:43.019 killing process with pid 78739 00:22:43.019 10:59:32 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:43.019 10:59:32 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:43.019 10:59:32 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78739' 00:22:43.019 10:59:32 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 78739 00:22:43.019 10:59:32 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 78739 00:22:48.327 10:59:36 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:22:51.614 262144+0 records in 00:22:51.614 262144+0 records out 00:22:51.614 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.91446 s, 274 MB/s 00:22:51.614 10:59:40 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:53.534 10:59:42 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:53.534 [2024-11-20 10:59:42.405212] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:22:53.534 [2024-11-20 10:59:42.405314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78971 ] 00:22:53.534 [2024-11-20 10:59:42.589130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.534 [2024-11-20 10:59:42.695657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.799 [2024-11-20 10:59:43.036850] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:53.799 [2024-11-20 10:59:43.036917] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:54.059 [2024-11-20 10:59:43.203191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.059 [2024-11-20 10:59:43.203426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:54.059 [2024-11-20 10:59:43.203478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:54.059 [2024-11-20 10:59:43.203489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.059 [2024-11-20 10:59:43.203550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.059 [2024-11-20 10:59:43.203563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:54.059 [2024-11-20 10:59:43.203581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:54.059 [2024-11-20 10:59:43.203591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.059 [2024-11-20 10:59:43.203635] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:22:54.059 [2024-11-20 10:59:43.204664] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:54.059 [2024-11-20 10:59:43.204685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.059 [2024-11-20 10:59:43.204696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:54.059 [2024-11-20 10:59:43.204706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:22:54.059 [2024-11-20 10:59:43.204716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.059 [2024-11-20 10:59:43.206132] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:54.059 [2024-11-20 10:59:43.223844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.059 [2024-11-20 10:59:43.223881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:54.059 [2024-11-20 10:59:43.223895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.741 ms 00:22:54.059 [2024-11-20 10:59:43.223921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.059 [2024-11-20 10:59:43.223990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.059 [2024-11-20 10:59:43.224002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:54.059 [2024-11-20 10:59:43.224013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:54.059 [2024-11-20 10:59:43.224023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.059 [2024-11-20 10:59:43.230651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.059 [2024-11-20 10:59:43.230678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:54.059 [2024-11-20 10:59:43.230690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.566 ms 00:22:54.059 [2024-11-20 10:59:43.230700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.059 [2024-11-20 10:59:43.230799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.059 [2024-11-20 10:59:43.230812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:54.059 [2024-11-20 10:59:43.230822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:54.059 [2024-11-20 10:59:43.230831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.059 [2024-11-20 10:59:43.230866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.059 [2024-11-20 10:59:43.230877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:54.059 [2024-11-20 10:59:43.230886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:54.059 [2024-11-20 10:59:43.230896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.059 [2024-11-20 10:59:43.230917] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:54.059 [2024-11-20 10:59:43.235513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.059 [2024-11-20 10:59:43.235691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:54.059 [2024-11-20 10:59:43.235712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.608 ms 00:22:54.059 [2024-11-20 10:59:43.235733] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.059 [2024-11-20 10:59:43.235766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.059 [2024-11-20 10:59:43.235778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:54.059 [2024-11-20 10:59:43.235788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:54.059 [2024-11-20 10:59:43.235798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.059 [2024-11-20 10:59:43.235848] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:54.059 [2024-11-20 10:59:43.235875] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:54.059 [2024-11-20 10:59:43.235910] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:54.059 [2024-11-20 10:59:43.235932] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:54.059 [2024-11-20 10:59:43.236019] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:54.059 [2024-11-20 10:59:43.236032] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:54.059 [2024-11-20 10:59:43.236045] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:54.059 [2024-11-20 10:59:43.236058] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:54.059 [2024-11-20 10:59:43.236070] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:54.059 [2024-11-20 10:59:43.236080] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:54.060 [2024-11-20 10:59:43.236090] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:54.060 [2024-11-20 10:59:43.236100] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:54.060 [2024-11-20 10:59:43.236110] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:54.060 [2024-11-20 10:59:43.236126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.060 [2024-11-20 10:59:43.236136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:54.060 [2024-11-20 10:59:43.236146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:22:54.060 [2024-11-20 10:59:43.236156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.060 [2024-11-20 10:59:43.236227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.060 [2024-11-20 10:59:43.236237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:54.060 [2024-11-20 10:59:43.236247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:54.060 [2024-11-20 10:59:43.236257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.060 [2024-11-20 10:59:43.236349] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:54.060 [2024-11-20 10:59:43.236366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:54.060 [2024-11-20 10:59:43.236377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:22:54.060 [2024-11-20 10:59:43.236387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.060 [2024-11-20 10:59:43.236397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:54.060 [2024-11-20 10:59:43.236406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:54.060 [2024-11-20 10:59:43.236415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:54.060 [2024-11-20 10:59:43.236425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:54.060 [2024-11-20 10:59:43.236435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:54.060 [2024-11-20 10:59:43.236444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:54.060 [2024-11-20 10:59:43.236454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:54.060 [2024-11-20 10:59:43.236463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:54.060 [2024-11-20 10:59:43.236472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:54.060 [2024-11-20 10:59:43.236481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:54.060 [2024-11-20 10:59:43.236490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:54.060 [2024-11-20 10:59:43.236509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.060 [2024-11-20 10:59:43.236518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:54.060 [2024-11-20 10:59:43.236527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:54.060 [2024-11-20 10:59:43.236536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.060 [2024-11-20 10:59:43.236545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:54.060 [2024-11-20 10:59:43.236554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:54.060 [2024-11-20 10:59:43.236563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.060 [2024-11-20 10:59:43.236572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:54.060 [2024-11-20 10:59:43.236581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:54.060 [2024-11-20 10:59:43.236590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.060 [2024-11-20 10:59:43.236620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:54.060 [2024-11-20 10:59:43.236630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:54.060 [2024-11-20 10:59:43.236639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.060 [2024-11-20 10:59:43.236648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:54.060 [2024-11-20 10:59:43.236658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:54.060 [2024-11-20 10:59:43.236666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:54.060 [2024-11-20 10:59:43.236675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:54.060 [2024-11-20 10:59:43.236685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:54.060 [2024-11-20 10:59:43.236694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:54.060 [2024-11-20 10:59:43.236703] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:22:54.060 [2024-11-20 10:59:43.236712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:54.060 [2024-11-20 10:59:43.236721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:54.060 [2024-11-20 10:59:43.236730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:54.060 [2024-11-20 10:59:43.236739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:54.060 [2024-11-20 10:59:43.236748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.060 [2024-11-20 10:59:43.236757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:54.060 [2024-11-20 10:59:43.236766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:54.060 [2024-11-20 10:59:43.236776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.060 [2024-11-20 10:59:43.236786] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:54.060 [2024-11-20 10:59:43.236796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:54.060 [2024-11-20 10:59:43.236805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:54.060 [2024-11-20 10:59:43.236815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:54.060 [2024-11-20 10:59:43.236825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:54.060 [2024-11-20 10:59:43.236834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:54.060 [2024-11-20 10:59:43.236843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:54.060 [2024-11-20 10:59:43.236852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:54.060 [2024-11-20 10:59:43.236861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:54.060 [2024-11-20 10:59:43.236870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:54.060 [2024-11-20 10:59:43.236880] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:54.060 [2024-11-20 10:59:43.236892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:54.060 [2024-11-20 10:59:43.236904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:54.060 [2024-11-20 10:59:43.236914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:54.060 [2024-11-20 10:59:43.236924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:54.060 [2024-11-20 10:59:43.236934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:54.060 [2024-11-20 10:59:43.236944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:54.060 [2024-11-20 10:59:43.236954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:54.060 [2024-11-20 10:59:43.236965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:54.060 [2024-11-20 10:59:43.236975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:54.060 [2024-11-20 10:59:43.236985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:54.060 [2024-11-20 10:59:43.236995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:54.060 [2024-11-20 10:59:43.237006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:54.060 [2024-11-20 10:59:43.237016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:54.060 [2024-11-20 10:59:43.237026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:54.060 [2024-11-20 10:59:43.237036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:54.060 [2024-11-20 10:59:43.237046] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:54.060 [2024-11-20 10:59:43.237061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:54.060 [2024-11-20 10:59:43.237073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:54.060 [2024-11-20 10:59:43.237083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:54.060 [2024-11-20 10:59:43.237093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:54.060 [2024-11-20 10:59:43.237104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:54.060 [2024-11-20 10:59:43.237126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.060 [2024-11-20 10:59:43.237136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:54.060 [2024-11-20 10:59:43.237145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:22:54.060 [2024-11-20 10:59:43.237154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.060 [2024-11-20 10:59:43.273935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.060 [2024-11-20 10:59:43.274099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:54.060 [2024-11-20 10:59:43.274120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.798 ms 00:22:54.060 [2024-11-20 10:59:43.274131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.060 [2024-11-20 10:59:43.274213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.060 [2024-11-20 10:59:43.274225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:54.060 [2024-11-20 10:59:43.274235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.052 ms 00:22:54.060 [2024-11-20 10:59:43.274245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.331524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.331560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:54.320 [2024-11-20 10:59:43.331572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.320 ms 00:22:54.320 [2024-11-20 10:59:43.331599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.331643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.331654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:54.320 [2024-11-20 10:59:43.331664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:54.320 [2024-11-20 10:59:43.331683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.332184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.332203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:54.320 [2024-11-20 10:59:43.332214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:22:54.320 [2024-11-20 10:59:43.332224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.332340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.332353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:54.320 [2024-11-20 10:59:43.332364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:22:54.320 [2024-11-20 10:59:43.332382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.349568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.349610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:54.320 [2024-11-20 10:59:43.349626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.194 ms 00:22:54.320 [2024-11-20 10:59:43.349636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.368243] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:54.320 [2024-11-20 10:59:43.368279] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:54.320 [2024-11-20 10:59:43.368293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.368303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:54.320 [2024-11-20 10:59:43.368313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.579 ms 00:22:54.320 [2024-11-20 10:59:43.368322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.395999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.396036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:54.320 [2024-11-20 10:59:43.396054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.684 ms 00:22:54.320 [2024-11-20 10:59:43.396063] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.413268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.413312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:54.320 [2024-11-20 10:59:43.413324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.194 ms 00:22:54.320 [2024-11-20 10:59:43.413348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.431012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.431169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:54.320 [2024-11-20 10:59:43.431204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.657 ms 00:22:54.320 [2024-11-20 10:59:43.431214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.431903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.431928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:54.320 [2024-11-20 10:59:43.431940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:22:54.320 [2024-11-20 10:59:43.431950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.513844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.513897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:54.320 [2024-11-20 10:59:43.513912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.002 ms 00:22:54.320 [2024-11-20 10:59:43.513932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.524088] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:54.320 [2024-11-20 10:59:43.526384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.526413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:54.320 [2024-11-20 10:59:43.526426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.429 ms 00:22:54.320 [2024-11-20 10:59:43.526437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.526520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.526533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:54.320 [2024-11-20 10:59:43.526544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:54.320 [2024-11-20 10:59:43.526554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.526658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.526671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:54.320 [2024-11-20 10:59:43.526682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:54.320 [2024-11-20 10:59:43.526692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.526716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.526728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:22:54.320 [2024-11-20 10:59:43.526738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:54.320 [2024-11-20 10:59:43.526748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.526785] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:54.320 [2024-11-20 10:59:43.526797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.526813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:54.320 [2024-11-20 10:59:43.526823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:54.320 [2024-11-20 10:59:43.526833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.561537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.561574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:54.320 [2024-11-20 10:59:43.561587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.738 ms 00:22:54.320 [2024-11-20 10:59:43.561625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.561709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.320 [2024-11-20 10:59:43.561721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:54.320 [2024-11-20 10:59:43.561732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:54.320 [2024-11-20 10:59:43.561741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.320 [2024-11-20 10:59:43.562899] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 359.777 ms, result 0
00:23:55.697  [2024-11-20T11:00:26.715Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-20 11:00:26.573409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.462 [2024-11-20 11:00:26.573455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:37.462 [2024-11-20 11:00:26.573471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:37.462 [2024-11-20 11:00:26.573482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.462 [2024-11-20 11:00:26.573504] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:37.462 [2024-11-20 11:00:26.577618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.462 [2024-11-20 11:00:26.577649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:37.462 [2024-11-20 11:00:26.577660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.104 ms 00:23:37.462 [2024-11-20 11:00:26.577669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.462 [2024-11-20 11:00:26.580117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.462 [2024-11-20 11:00:26.580264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:37.462 [2024-11-20 11:00:26.580285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.422 ms 00:23:37.462 [2024-11-20 11:00:26.580296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.462 [2024-11-20 11:00:26.598052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.462 [2024-11-20 11:00:26.598101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:37.462 [2024-11-20 11:00:26.598114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.762 ms 00:23:37.463 [2024-11-20 11:00:26.598140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.463 [2024-11-20 11:00:26.603109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.463 [2024-11-20 11:00:26.603246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:37.463 [2024-11-20 11:00:26.603339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.943 ms 00:23:37.463 [2024-11-20 11:00:26.603374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.463 [2024-11-20 11:00:26.638517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.463 [2024-11-20
11:00:26.638683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:37.463 [2024-11-20 11:00:26.638781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.119 ms 00:23:37.463 [2024-11-20 11:00:26.638817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.463 [2024-11-20 11:00:26.661662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.463 [2024-11-20 11:00:26.661798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:37.463 [2024-11-20 11:00:26.661817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.824 ms 00:23:37.463 [2024-11-20 11:00:26.661843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.463 [2024-11-20 11:00:26.661975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.463 [2024-11-20 11:00:26.661990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:37.463 [2024-11-20 11:00:26.662007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:37.463 [2024-11-20 11:00:26.662016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.463 [2024-11-20 11:00:26.697701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.463 [2024-11-20 11:00:26.697734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:37.463 [2024-11-20 11:00:26.697746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.727 ms 00:23:37.463 [2024-11-20 11:00:26.697771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.722 [2024-11-20 11:00:26.732398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.722 [2024-11-20 11:00:26.732433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:37.722 [2024-11-20 11:00:26.732456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.648 ms 00:23:37.722 [2024-11-20 11:00:26.732465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.722 [2024-11-20 11:00:26.766569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.722 [2024-11-20 11:00:26.766608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:37.722 [2024-11-20 11:00:26.766620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.125 ms 00:23:37.722 [2024-11-20 11:00:26.766628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.722 [2024-11-20 11:00:26.801523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.722 [2024-11-20 11:00:26.801558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:37.722 [2024-11-20 11:00:26.801570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.883 ms 00:23:37.722 [2024-11-20 11:00:26.801579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.722 [2024-11-20 11:00:26.801623] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:37.722 [2024-11-20 11:00:26.801659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:37.722 [2024-11-20 11:00:26.801902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.801912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.801921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.801931] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.801942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.801952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.801963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.801973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.801983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.801993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802197] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 
11:00:26.802453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:37.723 [2024-11-20 11:00:26.802715] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:37.723 [2024-11-20 11:00:26.802730] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b1bdabc-6e72-4c1d-a5ab-4dbd8ece5857 00:23:37.723 [2024-11-20 
11:00:26.802740] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:37.723 [2024-11-20 11:00:26.802753] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:37.723 [2024-11-20 11:00:26.802762] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:37.723 [2024-11-20 11:00:26.802772] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:37.723 [2024-11-20 11:00:26.802782] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:37.723 [2024-11-20 11:00:26.802791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:37.723 [2024-11-20 11:00:26.802801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:37.723 [2024-11-20 11:00:26.802819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:37.723 [2024-11-20 11:00:26.802828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:37.723 [2024-11-20 11:00:26.802837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.724 [2024-11-20 11:00:26.802847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:37.724 [2024-11-20 11:00:26.802858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.217 ms 00:23:37.724 [2024-11-20 11:00:26.802867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.724 [2024-11-20 11:00:26.821982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.724 [2024-11-20 11:00:26.822014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:37.724 [2024-11-20 11:00:26.822026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.111 ms 00:23:37.724 [2024-11-20 11:00:26.822035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.724 [2024-11-20 11:00:26.822557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.724 [2024-11-20 11:00:26.822569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:37.724 [2024-11-20 11:00:26.822579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:23:37.724 [2024-11-20 11:00:26.822588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.724 [2024-11-20 11:00:26.870648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.724 [2024-11-20 11:00:26.870820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:37.724 [2024-11-20 11:00:26.870840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.724 [2024-11-20 11:00:26.870850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.724 [2024-11-20 11:00:26.870904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.724 [2024-11-20 11:00:26.870914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:37.724 [2024-11-20 11:00:26.870925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.724 [2024-11-20 11:00:26.870934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.724 [2024-11-20 11:00:26.871003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.724 [2024-11-20 11:00:26.871016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:37.724 [2024-11-20 11:00:26.871026] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.724 [2024-11-20 11:00:26.871036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.724 [2024-11-20 11:00:26.871052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.724 [2024-11-20 11:00:26.871062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:37.724 [2024-11-20 11:00:26.871072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.724 [2024-11-20 11:00:26.871082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.981 [2024-11-20 11:00:26.987168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.981 [2024-11-20 11:00:26.987220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:37.981 [2024-11-20 11:00:26.987235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.981 [2024-11-20 11:00:26.987245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.981 [2024-11-20 11:00:27.081740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.981 [2024-11-20 11:00:27.081787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:37.981 [2024-11-20 11:00:27.081802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.981 [2024-11-20 11:00:27.081812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.981 [2024-11-20 11:00:27.081888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.981 [2024-11-20 11:00:27.081905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:37.981 [2024-11-20 11:00:27.081916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.981 [2024-11-20 11:00:27.081925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.981 [2024-11-20 11:00:27.081960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.981 [2024-11-20 11:00:27.081970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:37.981 [2024-11-20 11:00:27.081979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.981 [2024-11-20 11:00:27.081989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.981 [2024-11-20 11:00:27.082200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.982 [2024-11-20 11:00:27.082217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:37.982 [2024-11-20 11:00:27.082228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.982 [2024-11-20 11:00:27.082237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.982 [2024-11-20 11:00:27.082270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.982 [2024-11-20 11:00:27.082282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:37.982 [2024-11-20 11:00:27.082292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.982 [2024-11-20 11:00:27.082300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.982 [2024-11-20 11:00:27.082335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.982 [2024-11-20 11:00:27.082345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 
00:23:37.982 [2024-11-20 11:00:27.082358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.982 [2024-11-20 11:00:27.082367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.982 [2024-11-20 11:00:27.082406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.982 [2024-11-20 11:00:27.082417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:37.982 [2024-11-20 11:00:27.082427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.982 [2024-11-20 11:00:27.082436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.982 [2024-11-20 11:00:27.082557] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 509.932 ms, result 0 00:23:39.355 00:23:39.355 00:23:39.355 11:00:28 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:39.355 [2024-11-20 11:00:28.534704] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:23:39.355 [2024-11-20 11:00:28.535159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79433 ] 00:23:39.613 [2024-11-20 11:00:28.709115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.613 [2024-11-20 11:00:28.816184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.181 [2024-11-20 11:00:29.158073] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.181 [2024-11-20 11:00:29.158141] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.181 [2024-11-20 11:00:29.317571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.181 [2024-11-20 11:00:29.317630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:40.181 [2024-11-20 11:00:29.317667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:40.181 [2024-11-20 11:00:29.317677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.181 [2024-11-20 11:00:29.317724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.181 [2024-11-20 11:00:29.317736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:40.181 [2024-11-20 11:00:29.317750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:40.181 [2024-11-20 11:00:29.317769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.181 [2024-11-20 11:00:29.317789] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:40.181 [2024-11-20 11:00:29.318817] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:40.181 [2024-11-20 11:00:29.318840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.181 [2024-11-20 11:00:29.318851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:40.181 [2024-11-20 11:00:29.318862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.057 ms 
00:23:40.181 [2024-11-20 11:00:29.318872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.181 [2024-11-20 11:00:29.320293] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:40.181 [2024-11-20 11:00:29.338463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.181 [2024-11-20 11:00:29.338518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:40.181 [2024-11-20 11:00:29.338533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.200 ms 00:23:40.181 [2024-11-20 11:00:29.338543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.181 [2024-11-20 11:00:29.338635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.181 [2024-11-20 11:00:29.338648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:40.181 [2024-11-20 11:00:29.338659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:40.181 [2024-11-20 11:00:29.338669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.181 [2024-11-20 11:00:29.345284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.181 [2024-11-20 11:00:29.345465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:40.181 [2024-11-20 11:00:29.345501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.556 ms 00:23:40.181 [2024-11-20 11:00:29.345512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.181 [2024-11-20 11:00:29.345598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.181 [2024-11-20 11:00:29.345620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:40.181 [2024-11-20 11:00:29.345632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:40.181 [2024-11-20 11:00:29.345642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.181 [2024-11-20 11:00:29.345682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.181 [2024-11-20 11:00:29.345695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:40.181 [2024-11-20 11:00:29.345705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:40.181 [2024-11-20 11:00:29.345715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.181 [2024-11-20 11:00:29.345739] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:40.181 [2024-11-20 11:00:29.350482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.181 [2024-11-20 11:00:29.350517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:40.181 [2024-11-20 11:00:29.350529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.756 ms 00:23:40.181 [2024-11-20 11:00:29.350558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.181 [2024-11-20 11:00:29.350587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.181 [2024-11-20 11:00:29.350598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:40.181 [2024-11-20 11:00:29.350623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:40.181 [2024-11-20 11:00:29.350634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.181 
[2024-11-20 11:00:29.350703] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:40.181 [2024-11-20 11:00:29.350726] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:40.181 [2024-11-20 11:00:29.350760] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:40.181 [2024-11-20 11:00:29.350781] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:40.181 [2024-11-20 11:00:29.350869] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:40.181 [2024-11-20 11:00:29.350882] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:40.181 [2024-11-20 11:00:29.350895] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:40.181 [2024-11-20 11:00:29.350908] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:40.181 [2024-11-20 11:00:29.350920] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:40.181 [2024-11-20 11:00:29.350931] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:40.182 [2024-11-20 11:00:29.350941] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:40.182 [2024-11-20 11:00:29.350951] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:40.182 [2024-11-20 11:00:29.350960] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:40.182 [2024-11-20 11:00:29.350974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.182 [2024-11-20 11:00:29.350984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:40.182 [2024-11-20 11:00:29.350994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:23:40.182 [2024-11-20 11:00:29.351003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.182 [2024-11-20 11:00:29.351078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.182 [2024-11-20 11:00:29.351093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:40.182 [2024-11-20 11:00:29.351104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:40.182 [2024-11-20 11:00:29.351114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.182 [2024-11-20 11:00:29.351206] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:40.182 [2024-11-20 11:00:29.351224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:40.182 [2024-11-20 11:00:29.351234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.182 [2024-11-20 11:00:29.351244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.182 [2024-11-20 11:00:29.351254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:40.182 [2024-11-20 11:00:29.351263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:40.182 [2024-11-20 11:00:29.351273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:40.182 [2024-11-20 11:00:29.351282] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region band_md 00:23:40.182 [2024-11-20 11:00:29.351291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:40.182 [2024-11-20 11:00:29.351300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.182 [2024-11-20 11:00:29.351309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:40.182 [2024-11-20 11:00:29.351320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:40.182 [2024-11-20 11:00:29.351329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.182 [2024-11-20 11:00:29.351338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:40.182 [2024-11-20 11:00:29.351348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:40.182 [2024-11-20 11:00:29.351366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.182 [2024-11-20 11:00:29.351375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:40.182 [2024-11-20 11:00:29.351385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:40.182 [2024-11-20 11:00:29.351394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.182 [2024-11-20 11:00:29.351403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:40.182 [2024-11-20 11:00:29.351412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:40.182 [2024-11-20 11:00:29.351421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.182 [2024-11-20 11:00:29.351430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:40.182 [2024-11-20 11:00:29.351440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:40.182 [2024-11-20 11:00:29.351449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.182 [2024-11-20 11:00:29.351459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:40.182 [2024-11-20 11:00:29.351467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:40.182 [2024-11-20 11:00:29.351476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.182 [2024-11-20 11:00:29.351485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:40.182 [2024-11-20 11:00:29.351494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:40.182 [2024-11-20 11:00:29.351503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.182 [2024-11-20 11:00:29.351512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:40.182 [2024-11-20 11:00:29.351521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:40.182 [2024-11-20 11:00:29.351531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.182 [2024-11-20 11:00:29.351539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:40.182 [2024-11-20 11:00:29.351549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:40.182 [2024-11-20 11:00:29.351557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.182 [2024-11-20 11:00:29.351566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:40.182 [2024-11-20 11:00:29.351575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:40.182 [2024-11-20 
11:00:29.351584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.182 [2024-11-20 11:00:29.351593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:40.182 [2024-11-20 11:00:29.351601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:40.182 [2024-11-20 11:00:29.351622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.182 [2024-11-20 11:00:29.351632] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:40.182 [2024-11-20 11:00:29.351650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:40.182 [2024-11-20 11:00:29.351659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.182 [2024-11-20 11:00:29.351669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.182 [2024-11-20 11:00:29.351679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:40.182 [2024-11-20 11:00:29.351688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:40.182 [2024-11-20 11:00:29.351698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:40.182 [2024-11-20 11:00:29.351707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:40.182 [2024-11-20 11:00:29.351716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:40.182 [2024-11-20 11:00:29.351725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:40.182 [2024-11-20 11:00:29.351735] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:40.182 [2024-11-20 11:00:29.351747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.182 [2024-11-20 11:00:29.351759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:40.182 [2024-11-20 11:00:29.351769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:40.182 [2024-11-20 11:00:29.351780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:40.182 [2024-11-20 11:00:29.351791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:40.182 [2024-11-20 11:00:29.351801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:40.182 [2024-11-20 11:00:29.351811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:40.182 [2024-11-20 11:00:29.351821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:40.182 [2024-11-20 11:00:29.351831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:40.182 [2024-11-20 11:00:29.351841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:40.182 [2024-11-20 11:00:29.351852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:40.182 [2024-11-20 11:00:29.351862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:40.182 [2024-11-20 11:00:29.351873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:40.182 [2024-11-20 11:00:29.351883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:40.182 [2024-11-20 11:00:29.351893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:40.182 [2024-11-20 11:00:29.351903] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:40.182 [2024-11-20 11:00:29.351917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.182 [2024-11-20 11:00:29.351929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:40.182 [2024-11-20 11:00:29.351939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:40.182 [2024-11-20 11:00:29.351949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:40.182 [2024-11-20 11:00:29.351959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:40.182 [2024-11-20 11:00:29.351970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.182 [2024-11-20 11:00:29.351983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:40.182 [2024-11-20 11:00:29.351994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:23:40.182 [2024-11-20 11:00:29.352003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.182 [2024-11-20 11:00:29.389320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.182 [2024-11-20 11:00:29.389498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.182 [2024-11-20 11:00:29.389644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.334 ms 00:23:40.182 [2024-11-20 11:00:29.389684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.182 [2024-11-20 11:00:29.389790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.182 [2024-11-20 11:00:29.389875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:40.183 [2024-11-20 11:00:29.389912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:40.183 [2024-11-20 11:00:29.390201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.441 [2024-11-20 11:00:29.452713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.441 [2024-11-20 11:00:29.452886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.441 [2024-11-20 11:00:29.452981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.455 ms 00:23:40.441 [2024-11-20 11:00:29.453017] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.441 [2024-11-20 11:00:29.453070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.441 [2024-11-20 11:00:29.453105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.441 [2024-11-20 11:00:29.453145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:40.441 [2024-11-20 11:00:29.453236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.441 [2024-11-20 11:00:29.453768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.441 [2024-11-20 11:00:29.453874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.441 [2024-11-20 11:00:29.453942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:23:40.441 [2024-11-20 11:00:29.453975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.441 [2024-11-20 11:00:29.454117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.441 [2024-11-20 11:00:29.454155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.441 [2024-11-20 11:00:29.454246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:23:40.441 [2024-11-20 11:00:29.454268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.441 [2024-11-20 11:00:29.473496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.441 [2024-11-20 11:00:29.473672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.441 [2024-11-20 11:00:29.473700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.232 ms 00:23:40.441 [2024-11-20 11:00:29.473710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.441 [2024-11-20 11:00:29.491976] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:40.441 [2024-11-20 11:00:29.492012] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:40.441 [2024-11-20 11:00:29.492026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.441 [2024-11-20 11:00:29.492036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:40.441 [2024-11-20 11:00:29.492047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.246 ms 00:23:40.441 [2024-11-20 11:00:29.492056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.441 [2024-11-20 11:00:29.521110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.441 [2024-11-20 11:00:29.521151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:40.441 [2024-11-20 11:00:29.521165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.062 ms 00:23:40.441 [2024-11-20 11:00:29.521175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.441 [2024-11-20 11:00:29.539493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.441 [2024-11-20 11:00:29.539531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:40.441 [2024-11-20 11:00:29.539545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.299 ms 00:23:40.441 [2024-11-20 11:00:29.539555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
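The layout dump earlier in the startup sequence is internally consistent and can be spot-checked by hand: each NV cache region begins where the previous one ends (sb at 0.00 MiB plus 0.12 MiB puts l2p at 0.12 MiB; l2p plus 80.00 MiB puts band_md at 80.12 MiB, and so on), and the 80.00 MiB l2p region is exactly the reported 20971520 L2P entries times the 4-byte L2P address size. A small sketch of that arithmetic in plain shell (the figures come straight from the dump above; nothing here is job-specific):

  # 20971520 L2P entries * 4 bytes per entry, converted to MiB
  echo $(( 20971520 * 4 / 1024 / 1024 ))   # prints 80 -- matches the 80.00 MiB l2p region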
00:23:40.441 [2024-11-20 11:00:29.557587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.441 [2024-11-20 11:00:29.557747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:40.441 [2024-11-20 11:00:29.557884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.024 ms 00:23:40.441 [2024-11-20 11:00:29.557920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.441 [2024-11-20 11:00:29.558645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.441 [2024-11-20 11:00:29.558774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:40.441 [2024-11-20 11:00:29.558850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:23:40.441 [2024-11-20 11:00:29.558892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.441 [2024-11-20 11:00:29.650314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.441 [2024-11-20 11:00:29.650575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:40.441 [2024-11-20 11:00:29.650726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.524 ms 00:23:40.441 [2024-11-20 11:00:29.650766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.441 [2024-11-20 11:00:29.660825] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:40.441 [2024-11-20 11:00:29.663293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.441 [2024-11-20 11:00:29.663448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:40.441 [2024-11-20 11:00:29.663522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.466 ms 00:23:40.441 [2024-11-20 11:00:29.663557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.441 [2024-11-20 11:00:29.663654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.441 [2024-11-20 11:00:29.663670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:40.441 [2024-11-20 11:00:29.663682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:40.441 [2024-11-20 11:00:29.663696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.442 [2024-11-20 11:00:29.663783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.442 [2024-11-20 11:00:29.663796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:40.442 [2024-11-20 11:00:29.663808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:40.442 [2024-11-20 11:00:29.663817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.442 [2024-11-20 11:00:29.663840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.442 [2024-11-20 11:00:29.663851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:40.442 [2024-11-20 11:00:29.663862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:40.442 [2024-11-20 11:00:29.663872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.442 [2024-11-20 11:00:29.663902] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:40.442 [2024-11-20 11:00:29.663918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.442 
[2024-11-20 11:00:29.663928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:40.442 [2024-11-20 11:00:29.663938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:40.442 [2024-11-20 11:00:29.663948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.699 [2024-11-20 11:00:29.698980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.699 [2024-11-20 11:00:29.699134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:40.699 [2024-11-20 11:00:29.699210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.067 ms 00:23:40.699 [2024-11-20 11:00:29.699252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.699 [2024-11-20 11:00:29.699344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.699 [2024-11-20 11:00:29.699384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:40.699 [2024-11-20 11:00:29.699463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:40.699 [2024-11-20 11:00:29.699498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.699 [2024-11-20 11:00:29.700562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.184 ms, result 0 00:23:42.076  [2024-11-20T11:00:32.264Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-20T11:00:33.200Z] Copying: 51/1024 [MB] (25 MBps) [2024-11-20T11:00:34.136Z] Copying: 77/1024 [MB] (25 MBps) [2024-11-20T11:00:35.072Z] Copying: 103/1024 [MB] (25 MBps) [2024-11-20T11:00:36.007Z] Copying: 128/1024 [MB] (25 MBps) [2024-11-20T11:00:36.942Z] Copying: 154/1024 [MB] (25 MBps) [2024-11-20T11:00:38.317Z] Copying: 179/1024 [MB] (25 MBps) [2024-11-20T11:00:39.255Z] Copying: 205/1024 [MB] (25 MBps) [2024-11-20T11:00:40.216Z] Copying: 230/1024 [MB] (25 MBps) [2024-11-20T11:00:41.153Z] Copying: 255/1024 [MB] (25 MBps) [2024-11-20T11:00:42.089Z] Copying: 280/1024 [MB] (25 MBps) [2024-11-20T11:00:43.026Z] Copying: 306/1024 [MB] (25 MBps) [2024-11-20T11:00:43.960Z] Copying: 332/1024 [MB] (25 MBps) [2024-11-20T11:00:44.897Z] Copying: 357/1024 [MB] (25 MBps) [2024-11-20T11:00:46.274Z] Copying: 383/1024 [MB] (26 MBps) [2024-11-20T11:00:47.211Z] Copying: 409/1024 [MB] (25 MBps) [2024-11-20T11:00:48.147Z] Copying: 434/1024 [MB] (25 MBps) [2024-11-20T11:00:49.084Z] Copying: 460/1024 [MB] (25 MBps) [2024-11-20T11:00:50.025Z] Copying: 486/1024 [MB] (25 MBps) [2024-11-20T11:00:50.962Z] Copying: 511/1024 [MB] (25 MBps) [2024-11-20T11:00:51.899Z] Copying: 537/1024 [MB] (26 MBps) [2024-11-20T11:00:53.277Z] Copying: 563/1024 [MB] (25 MBps) [2024-11-20T11:00:54.215Z] Copying: 589/1024 [MB] (25 MBps) [2024-11-20T11:00:55.151Z] Copying: 615/1024 [MB] (26 MBps) [2024-11-20T11:00:56.089Z] Copying: 641/1024 [MB] (26 MBps) [2024-11-20T11:00:57.026Z] Copying: 667/1024 [MB] (26 MBps) [2024-11-20T11:00:57.963Z] Copying: 693/1024 [MB] (25 MBps) [2024-11-20T11:00:58.900Z] Copying: 718/1024 [MB] (25 MBps) [2024-11-20T11:01:00.275Z] Copying: 743/1024 [MB] (25 MBps) [2024-11-20T11:01:01.211Z] Copying: 768/1024 [MB] (25 MBps) [2024-11-20T11:01:02.147Z] Copying: 794/1024 [MB] (25 MBps) [2024-11-20T11:01:03.082Z] Copying: 819/1024 [MB] (25 MBps) [2024-11-20T11:01:04.016Z] Copying: 845/1024 [MB] (25 MBps) [2024-11-20T11:01:04.953Z] Copying: 870/1024 [MB] (25 MBps) [2024-11-20T11:01:05.888Z] Copying: 896/1024 [MB] (25 MBps) [2024-11-20T11:01:07.265Z] 
Copying: 922/1024 [MB] (25 MBps) [2024-11-20T11:01:08.202Z] Copying: 947/1024 [MB] (25 MBps) [2024-11-20T11:01:09.139Z] Copying: 973/1024 [MB] (25 MBps) [2024-11-20T11:01:10.074Z] Copying: 998/1024 [MB] (25 MBps) [2024-11-20T11:01:10.074Z] Copying: 1023/1024 [MB] (25 MBps) [2024-11-20T11:01:11.452Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-20 11:01:11.120512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.199 [2024-11-20 11:01:11.120812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:22.199 [2024-11-20 11:01:11.120910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:22.199 [2024-11-20 11:01:11.120950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.199 [2024-11-20 11:01:11.121020] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:22.199 [2024-11-20 11:01:11.125631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.199 [2024-11-20 11:01:11.125785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:22.199 [2024-11-20 11:01:11.125879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.560 ms 00:24:22.199 [2024-11-20 11:01:11.125916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.199 [2024-11-20 11:01:11.126145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.199 [2024-11-20 11:01:11.126204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:22.199 [2024-11-20 11:01:11.126271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:24:22.199 [2024-11-20 11:01:11.126302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.199 [2024-11-20 11:01:11.129278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.199 [2024-11-20 11:01:11.129412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:22.199 [2024-11-20 11:01:11.129546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.944 ms 00:24:22.199 [2024-11-20 11:01:11.129602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.199 [2024-11-20 11:01:11.135015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.199 [2024-11-20 11:01:11.135173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:22.200 [2024-11-20 11:01:11.135620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.351 ms 00:24:22.200 [2024-11-20 11:01:11.135740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.200 [2024-11-20 11:01:11.173742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.200 [2024-11-20 11:01:11.173913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:22.200 [2024-11-20 11:01:11.173939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.955 ms 00:24:22.200 [2024-11-20 11:01:11.173954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.200 [2024-11-20 11:01:11.196194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.200 [2024-11-20 11:01:11.196359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:22.200 [2024-11-20 11:01:11.196383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.180 ms 00:24:22.200 [2024-11-20 
11:01:11.196394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.200 [2024-11-20 11:01:11.196568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.200 [2024-11-20 11:01:11.196591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:22.200 [2024-11-20 11:01:11.196620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:24:22.200 [2024-11-20 11:01:11.196630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.200 [2024-11-20 11:01:11.233821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.200 [2024-11-20 11:01:11.233859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:22.200 [2024-11-20 11:01:11.233873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.234 ms 00:24:22.200 [2024-11-20 11:01:11.233883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.200 [2024-11-20 11:01:11.269043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.200 [2024-11-20 11:01:11.269092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:22.200 [2024-11-20 11:01:11.269104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.178 ms 00:24:22.200 [2024-11-20 11:01:11.269129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.200 [2024-11-20 11:01:11.303535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.200 [2024-11-20 11:01:11.303572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:22.200 [2024-11-20 11:01:11.303585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.419 ms 00:24:22.200 [2024-11-20 11:01:11.303608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.200 [2024-11-20 11:01:11.337919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.200 [2024-11-20 11:01:11.337954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:22.200 [2024-11-20 11:01:11.337967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.273 ms 00:24:22.200 [2024-11-20 11:01:11.337977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.200 [2024-11-20 11:01:11.338014] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:22.200 [2024-11-20 11:01:11.338030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338370] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 
11:01:11.338691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:22.200 [2024-11-20 11:01:11.338746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:24:22.201 [2024-11-20 11:01:11.338957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.338998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.339008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.339018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.339028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.339038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.339049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.339059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.339070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.339080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.339090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.339101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.339111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.339121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.339132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:22.201 [2024-11-20 11:01:11.339149] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:22.201 [2024-11-20 11:01:11.339163] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b1bdabc-6e72-4c1d-a5ab-4dbd8ece5857 00:24:22.201 [2024-11-20 11:01:11.339174] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:22.201 [2024-11-20 11:01:11.339184] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:22.201 [2024-11-20 11:01:11.339194] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:22.201 [2024-11-20 11:01:11.339204] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:22.201 [2024-11-20 11:01:11.339213] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:22.201 [2024-11-20 11:01:11.339222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:22.201 [2024-11-20 11:01:11.339242] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:22.201 [2024-11-20 11:01:11.339251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:22.201 [2024-11-20 11:01:11.339260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:22.201 [2024-11-20 11:01:11.339270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.201 [2024-11-20 11:01:11.339280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:22.201 [2024-11-20 11:01:11.339292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:24:22.201 [2024-11-20 11:01:11.339302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.201 [2024-11-20 11:01:11.358791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.201 [2024-11-20 11:01:11.358825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:22.201 [2024-11-20 11:01:11.358837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.467 ms 00:24:22.201 [2024-11-20 11:01:11.358847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.201 [2024-11-20 11:01:11.359382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.201 [2024-11-20 11:01:11.359403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:22.201 [2024-11-20 11:01:11.359414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:24:22.201 [2024-11-20 11:01:11.359430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.201 [2024-11-20 11:01:11.407840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.201 [2024-11-20 11:01:11.407878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:22.201 [2024-11-20 11:01:11.407890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.201 [2024-11-20 11:01:11.407916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.201 [2024-11-20 11:01:11.407969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.201 [2024-11-20 11:01:11.407979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:22.201 [2024-11-20 11:01:11.407990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.201 [2024-11-20 11:01:11.408004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.201 [2024-11-20 11:01:11.408066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.201 [2024-11-20 11:01:11.408079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:22.201 [2024-11-20 11:01:11.408089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.201 [2024-11-20 11:01:11.408099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.201 [2024-11-20 11:01:11.408115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.201 [2024-11-20 11:01:11.408125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:22.201 [2024-11-20 11:01:11.408135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.201 [2024-11-20 11:01:11.408144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.461 [2024-11-20 11:01:11.524431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:24:22.461 [2024-11-20 11:01:11.524483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:22.461 [2024-11-20 11:01:11.524498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.461 [2024-11-20 11:01:11.524508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.461 [2024-11-20 11:01:11.617408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.461 [2024-11-20 11:01:11.617458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:22.461 [2024-11-20 11:01:11.617472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.461 [2024-11-20 11:01:11.617497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.461 [2024-11-20 11:01:11.617585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.461 [2024-11-20 11:01:11.617597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:22.461 [2024-11-20 11:01:11.617630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.461 [2024-11-20 11:01:11.617641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.461 [2024-11-20 11:01:11.617677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.461 [2024-11-20 11:01:11.617688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:22.461 [2024-11-20 11:01:11.617714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.461 [2024-11-20 11:01:11.617725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.461 [2024-11-20 11:01:11.617849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.461 [2024-11-20 11:01:11.617863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:22.461 [2024-11-20 11:01:11.617874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.461 [2024-11-20 11:01:11.617883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.461 [2024-11-20 11:01:11.617919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.461 [2024-11-20 11:01:11.617931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:22.461 [2024-11-20 11:01:11.617942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.461 [2024-11-20 11:01:11.617951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.461 [2024-11-20 11:01:11.617988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.461 [2024-11-20 11:01:11.618004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:22.461 [2024-11-20 11:01:11.618014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.461 [2024-11-20 11:01:11.618024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.461 [2024-11-20 11:01:11.618065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.461 [2024-11-20 11:01:11.618077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:22.461 [2024-11-20 11:01:11.618087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.461 [2024-11-20 11:01:11.618096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.461 [2024-11-20 
11:01:11.618211] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 498.477 ms, result 0 00:24:23.413 00:24:23.413 00:24:23.413 11:01:12 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:25.372 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:25.372 11:01:14 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:25.372 [2024-11-20 11:01:14.322953] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:24:25.372 [2024-11-20 11:01:14.323062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79906 ] 00:24:25.372 [2024-11-20 11:01:14.499763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.372 [2024-11-20 11:01:14.605181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.941 [2024-11-20 11:01:14.956881] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:25.941 [2024-11-20 11:01:14.957127] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:25.941 [2024-11-20 11:01:15.116750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.941 [2024-11-20 11:01:15.116805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:25.941 [2024-11-20 11:01:15.116841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:25.941 [2024-11-20 11:01:15.116850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.941 [2024-11-20 11:01:15.116896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.941 [2024-11-20 11:01:15.116907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:25.941 [2024-11-20 11:01:15.116921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:25.941 [2024-11-20 11:01:15.116930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.941 [2024-11-20 11:01:15.116950] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:25.941 [2024-11-20 11:01:15.117930] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:25.941 [2024-11-20 11:01:15.117952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.941 [2024-11-20 11:01:15.117963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:25.941 [2024-11-20 11:01:15.117974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:24:25.941 [2024-11-20 11:01:15.117984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.941 [2024-11-20 11:01:15.119429] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:25.941 [2024-11-20 11:01:15.137970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.941 [2024-11-20 11:01:15.138005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:25.941 [2024-11-20 11:01:15.138019] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.572 ms 00:24:25.941 [2024-11-20 11:01:15.138029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.941 [2024-11-20 11:01:15.138093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.941 [2024-11-20 11:01:15.138105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:25.941 [2024-11-20 11:01:15.138115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:25.941 [2024-11-20 11:01:15.138124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.941 [2024-11-20 11:01:15.144925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.941 [2024-11-20 11:01:15.145085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:25.941 [2024-11-20 11:01:15.145123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.745 ms 00:24:25.941 [2024-11-20 11:01:15.145134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.941 [2024-11-20 11:01:15.145220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.941 [2024-11-20 11:01:15.145232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:25.941 [2024-11-20 11:01:15.145243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:25.941 [2024-11-20 11:01:15.145253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.941 [2024-11-20 11:01:15.145293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.941 [2024-11-20 11:01:15.145304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:25.941 [2024-11-20 11:01:15.145315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:25.941 [2024-11-20 11:01:15.145324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.941 [2024-11-20 11:01:15.145348] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:25.941 [2024-11-20 11:01:15.149990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.941 [2024-11-20 11:01:15.150020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:25.941 [2024-11-20 11:01:15.150032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.655 ms 00:24:25.941 [2024-11-20 11:01:15.150044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.941 [2024-11-20 11:01:15.150073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.941 [2024-11-20 11:01:15.150083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:25.941 [2024-11-20 11:01:15.150092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:25.941 [2024-11-20 11:01:15.150102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.941 [2024-11-20 11:01:15.150152] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:25.941 [2024-11-20 11:01:15.150175] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:25.941 [2024-11-20 11:01:15.150207] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:25.941 [2024-11-20 11:01:15.150226] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:25.941 [2024-11-20 11:01:15.150308] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:25.941 [2024-11-20 11:01:15.150319] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:25.941 [2024-11-20 11:01:15.150332] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:25.941 [2024-11-20 11:01:15.150344] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:25.941 [2024-11-20 11:01:15.150356] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:25.941 [2024-11-20 11:01:15.150366] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:25.941 [2024-11-20 11:01:15.150375] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:25.941 [2024-11-20 11:01:15.150383] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:25.941 [2024-11-20 11:01:15.150393] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:25.941 [2024-11-20 11:01:15.150406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.941 [2024-11-20 11:01:15.150415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:25.941 [2024-11-20 11:01:15.150425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:24:25.941 [2024-11-20 11:01:15.150434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.941 [2024-11-20 11:01:15.150501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.941 [2024-11-20 11:01:15.150519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:25.941 [2024-11-20 11:01:15.150529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:25.941 [2024-11-20 11:01:15.150537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.941 [2024-11-20 11:01:15.150662] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:25.941 [2024-11-20 11:01:15.150681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:25.941 [2024-11-20 11:01:15.150692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:25.941 [2024-11-20 11:01:15.150702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.941 [2024-11-20 11:01:15.150712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:25.941 [2024-11-20 11:01:15.150721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:25.941 [2024-11-20 11:01:15.150730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:25.941 [2024-11-20 11:01:15.150740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:25.941 [2024-11-20 11:01:15.150750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:25.941 [2024-11-20 11:01:15.150759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:25.941 [2024-11-20 11:01:15.150768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:25.941 [2024-11-20 11:01:15.150777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:24:25.941 [2024-11-20 11:01:15.150786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:25.941 [2024-11-20 11:01:15.150795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:25.941 [2024-11-20 11:01:15.150804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:25.941 [2024-11-20 11:01:15.150821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.941 [2024-11-20 11:01:15.150830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:25.941 [2024-11-20 11:01:15.150839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:25.941 [2024-11-20 11:01:15.150848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.941 [2024-11-20 11:01:15.150857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:25.941 [2024-11-20 11:01:15.150866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:25.941 [2024-11-20 11:01:15.150878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.941 [2024-11-20 11:01:15.150887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:25.941 [2024-11-20 11:01:15.150896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:25.941 [2024-11-20 11:01:15.150905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.941 [2024-11-20 11:01:15.150914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:25.941 [2024-11-20 11:01:15.150923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:25.941 [2024-11-20 11:01:15.150931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.941 [2024-11-20 11:01:15.150940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:25.941 [2024-11-20 11:01:15.150949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:25.941 [2024-11-20 11:01:15.150958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:25.941 [2024-11-20 11:01:15.150966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:25.941 [2024-11-20 11:01:15.150975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:25.942 [2024-11-20 11:01:15.150984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:25.942 [2024-11-20 11:01:15.150993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:25.942 [2024-11-20 11:01:15.151002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:25.942 [2024-11-20 11:01:15.151010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:25.942 [2024-11-20 11:01:15.151019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:25.942 [2024-11-20 11:01:15.151027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:25.942 [2024-11-20 11:01:15.151036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.942 [2024-11-20 11:01:15.151045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:25.942 [2024-11-20 11:01:15.151053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:25.942 [2024-11-20 11:01:15.151063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.942 [2024-11-20 11:01:15.151072] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:25.942 [2024-11-20 11:01:15.151081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:25.942 [2024-11-20 11:01:15.151091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:25.942 [2024-11-20 11:01:15.151100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:25.942 [2024-11-20 11:01:15.151125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:25.942 [2024-11-20 11:01:15.151134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:25.942 [2024-11-20 11:01:15.151144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:25.942 [2024-11-20 11:01:15.151153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:25.942 [2024-11-20 11:01:15.151162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:25.942 [2024-11-20 11:01:15.151171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:25.942 [2024-11-20 11:01:15.151182] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:25.942 [2024-11-20 11:01:15.151193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:25.942 [2024-11-20 11:01:15.151204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:25.942 [2024-11-20 11:01:15.151214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:25.942 [2024-11-20 11:01:15.151224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:25.942 [2024-11-20 11:01:15.151234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:25.942 [2024-11-20 11:01:15.151244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:25.942 [2024-11-20 11:01:15.151254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:25.942 [2024-11-20 11:01:15.151264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:25.942 [2024-11-20 11:01:15.151274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:25.942 [2024-11-20 11:01:15.151285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:25.942 [2024-11-20 11:01:15.151295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:25.942 [2024-11-20 11:01:15.151306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:25.942 [2024-11-20 11:01:15.151319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:25.942 [2024-11-20 11:01:15.151329] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:25.942 [2024-11-20 11:01:15.151339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:25.942 [2024-11-20 11:01:15.151349] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:25.942 [2024-11-20 11:01:15.151363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:25.942 [2024-11-20 11:01:15.151374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:25.942 [2024-11-20 11:01:15.151384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:25.942 [2024-11-20 11:01:15.151394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:25.942 [2024-11-20 11:01:15.151406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:25.942 [2024-11-20 11:01:15.151417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.942 [2024-11-20 11:01:15.151427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:25.942 [2024-11-20 11:01:15.151437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:24:25.942 [2024-11-20 11:01:15.151447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.942 [2024-11-20 11:01:15.187827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.942 [2024-11-20 11:01:15.188004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:25.942 [2024-11-20 11:01:15.188041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.393 ms 00:24:25.942 [2024-11-20 11:01:15.188052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.942 [2024-11-20 11:01:15.188135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.942 [2024-11-20 11:01:15.188146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:25.942 [2024-11-20 11:01:15.188157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:25.942 [2024-11-20 11:01:15.188167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.201 [2024-11-20 11:01:15.267075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.201 [2024-11-20 11:01:15.267113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:26.201 [2024-11-20 11:01:15.267127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.981 ms 00:24:26.201 [2024-11-20 11:01:15.267137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.201 [2024-11-20 11:01:15.267174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.201 [2024-11-20 11:01:15.267184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:26.201 [2024-11-20 11:01:15.267194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:26.201 [2024-11-20 11:01:15.267208] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.201 [2024-11-20 11:01:15.267714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.201 [2024-11-20 11:01:15.267729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:26.201 [2024-11-20 11:01:15.267740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:24:26.201 [2024-11-20 11:01:15.267749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.201 [2024-11-20 11:01:15.267864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.201 [2024-11-20 11:01:15.267878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:26.201 [2024-11-20 11:01:15.267888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:24:26.201 [2024-11-20 11:01:15.267903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.201 [2024-11-20 11:01:15.285981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.201 [2024-11-20 11:01:15.286162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:26.201 [2024-11-20 11:01:15.286191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.088 ms 00:24:26.201 [2024-11-20 11:01:15.286203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.201 [2024-11-20 11:01:15.303802] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:26.201 [2024-11-20 11:01:15.303850] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:26.201 [2024-11-20 11:01:15.303865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.201 [2024-11-20 11:01:15.303875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:26.201 [2024-11-20 11:01:15.303886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.586 ms 00:24:26.201 [2024-11-20 11:01:15.303895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.201 [2024-11-20 11:01:15.331928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.201 [2024-11-20 11:01:15.332097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:26.201 [2024-11-20 11:01:15.332135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.038 ms 00:24:26.201 [2024-11-20 11:01:15.332146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.201 [2024-11-20 11:01:15.349556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.201 [2024-11-20 11:01:15.349605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:26.201 [2024-11-20 11:01:15.349618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.397 ms 00:24:26.201 [2024-11-20 11:01:15.349627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.201 [2024-11-20 11:01:15.366690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.201 [2024-11-20 11:01:15.366832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:26.201 [2024-11-20 11:01:15.366851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.036 ms 00:24:26.201 [2024-11-20 11:01:15.366877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.201 
[2024-11-20 11:01:15.367643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.201 [2024-11-20 11:01:15.367666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:26.201 [2024-11-20 11:01:15.367678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:24:26.201 [2024-11-20 11:01:15.367691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.201 [2024-11-20 11:01:15.446234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.201 [2024-11-20 11:01:15.446290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:26.201 [2024-11-20 11:01:15.446312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.648 ms 00:24:26.201 [2024-11-20 11:01:15.446322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.461 [2024-11-20 11:01:15.456483] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:26.461 [2024-11-20 11:01:15.458825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.461 [2024-11-20 11:01:15.458855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:26.461 [2024-11-20 11:01:15.458868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.477 ms 00:24:26.461 [2024-11-20 11:01:15.458877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.461 [2024-11-20 11:01:15.458950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.461 [2024-11-20 11:01:15.458963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:26.461 [2024-11-20 11:01:15.458973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:26.461 [2024-11-20 11:01:15.458986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.461 [2024-11-20 11:01:15.459053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.461 [2024-11-20 11:01:15.459065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:26.461 [2024-11-20 11:01:15.459075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:26.461 [2024-11-20 11:01:15.459084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.461 [2024-11-20 11:01:15.459104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.461 [2024-11-20 11:01:15.459114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:26.461 [2024-11-20 11:01:15.459123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:26.461 [2024-11-20 11:01:15.459132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.461 [2024-11-20 11:01:15.459166] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:26.461 [2024-11-20 11:01:15.459180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.461 [2024-11-20 11:01:15.459190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:26.461 [2024-11-20 11:01:15.459199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:26.461 [2024-11-20 11:01:15.459209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.461 [2024-11-20 11:01:15.494430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.461 [2024-11-20 
11:01:15.494465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:26.461 [2024-11-20 11:01:15.494479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.259 ms 00:24:26.461 [2024-11-20 11:01:15.494518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.461 [2024-11-20 11:01:15.494608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:26.461 [2024-11-20 11:01:15.494621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:26.461 [2024-11-20 11:01:15.494633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:26.461 [2024-11-20 11:01:15.494642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:26.461 [2024-11-20 11:01:15.495709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 379.105 ms, result 0 00:25:27.399 [43 intermediate progress updates trimmed: Copying advanced from 22/1024 [MB] at 2024-11-20T11:01:17.587Z to 1013/1024 [MB] at 2024-11-20T11:01:59.630Z, sustained 22–25 MBps; the final two entries follow]
[2024-11-20T11:01:59.630Z] Copying: 1023/1024 [MB] (10 MBps) [2024-11-20T11:01:59.630Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-20 11:01:59.556805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.377 [2024-11-20 11:01:59.556864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:10.377 [2024-11-20 11:01:59.556881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:10.377 [2024-11-20 11:01:59.556915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.377 [2024-11-20 11:01:59.558529] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:10.377 [2024-11-20 11:01:59.563499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.377 [2024-11-20 11:01:59.563538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:10.377 [2024-11-20 11:01:59.563552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.939 ms 00:25:10.377 [2024-11-20 11:01:59.563563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.377 [2024-11-20 11:01:59.575053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.377 [2024-11-20 11:01:59.575232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:10.377 [2024-11-20 11:01:59.575256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.212 ms 00:25:10.377 [2024-11-20 11:01:59.575267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.377 [2024-11-20 11:01:59.597771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.377 [2024-11-20 11:01:59.597919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:10.377 [2024-11-20 11:01:59.597941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.512 ms 00:25:10.377 [2024-11-20 11:01:59.597952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.377 [2024-11-20 11:01:59.602796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.377 [2024-11-20 11:01:59.602827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:10.377 [2024-11-20 11:01:59.602839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.816 ms 00:25:10.377 [2024-11-20 11:01:59.602849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.636 [2024-11-20 11:01:59.637639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.636 [2024-11-20 11:01:59.637674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:10.636 [2024-11-20 11:01:59.637687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.787 ms 00:25:10.636 [2024-11-20 11:01:59.637713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.636 [2024-11-20 11:01:59.656967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.636 [2024-11-20 11:01:59.657108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:10.636 [2024-11-20 11:01:59.657128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.250 ms 00:25:10.636 [2024-11-20 11:01:59.657161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.636 [2024-11-20 11:01:59.772488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.636 [2024-11-20 
11:01:59.772651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:10.636 [2024-11-20 11:01:59.772672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.476 ms 00:25:10.636 [2024-11-20 11:01:59.772684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.636 [2024-11-20 11:01:59.807582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.636 [2024-11-20 11:01:59.807617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:10.636 [2024-11-20 11:01:59.807629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.933 ms 00:25:10.636 [2024-11-20 11:01:59.807639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.636 [2024-11-20 11:01:59.842122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.636 [2024-11-20 11:01:59.842166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:10.636 [2024-11-20 11:01:59.842178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.505 ms 00:25:10.636 [2024-11-20 11:01:59.842187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.636 [2024-11-20 11:01:59.877208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.636 [2024-11-20 11:01:59.877344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:10.636 [2024-11-20 11:01:59.877364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.044 ms 00:25:10.636 [2024-11-20 11:01:59.877374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.896 [2024-11-20 11:01:59.912968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.896 [2024-11-20 11:01:59.913002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:10.896 [2024-11-20 11:01:59.913014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.581 ms 00:25:10.896 [2024-11-20 11:01:59.913024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.896 [2024-11-20 11:01:59.913059] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:10.896 [2024-11-20 11:01:59.913075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 100096 / 261120 wr_cnt: 1 state: open 00:25:10.896 [2024-11-20 11:01:59.913087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:10.896 [2024-11-20 11:01:59.913098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:10.896 [2024-11-20 11:01:59.913109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:10.896 [2024-11-20 11:01:59.913119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:10.896 [2024-11-20 11:01:59.913130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:10.896 [2024-11-20 11:01:59.913141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:10.896 [2024-11-20 11:01:59.913151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:10.896 [2024-11-20 11:01:59.913162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:10.896 
[2024-11-20 11:01:59.913172–11:01:59.913939] ftl_debug.c: 167:ftl_dev_dump_bands: [74 identical entries trimmed: Band 10 through Band 83, each 0 / 261120 wr_cnt: 0 state: free] [2024-11-20 11:01:59.913949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*:
[FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.913959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.913969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.913979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.913989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.914000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.914010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.914020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.914030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.914041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.914051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.914061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.914071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.914082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.914092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.914103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.914113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:10.897 [2024-11-20 11:01:59.914130] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:10.897 [2024-11-20 11:01:59.914139] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b1bdabc-6e72-4c1d-a5ab-4dbd8ece5857 00:25:10.897 [2024-11-20 11:01:59.914150] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 100096 00:25:10.897 [2024-11-20 11:01:59.914159] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 101056 00:25:10.897 [2024-11-20 11:01:59.914168] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 100096 00:25:10.897 [2024-11-20 11:01:59.914179] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096 00:25:10.897 [2024-11-20 11:01:59.914188] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:10.897 [2024-11-20 11:01:59.914204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:10.897 [2024-11-20 11:01:59.914224] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:10.897 [2024-11-20 11:01:59.914233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:10.897 [2024-11-20 11:01:59.914241] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:10.897 [2024-11-20 11:01:59.914251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.897 [2024-11-20 11:01:59.914260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:10.897 [2024-11-20 11:01:59.914271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.193 ms 00:25:10.897 [2024-11-20 11:01:59.914280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.897 [2024-11-20 11:01:59.933931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.897 [2024-11-20 11:01:59.933962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:10.897 [2024-11-20 11:01:59.933981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.650 ms 00:25:10.897 [2024-11-20 11:01:59.934013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.897 [2024-11-20 11:01:59.934571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.897 [2024-11-20 11:01:59.934586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:10.897 [2024-11-20 11:01:59.934607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:25:10.897 [2024-11-20 11:01:59.934617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.897 [2024-11-20 11:01:59.986046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.897 [2024-11-20 11:01:59.986088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:10.897 [2024-11-20 11:01:59.986105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.897 [2024-11-20 11:01:59.986115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.897 [2024-11-20 11:01:59.986165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.897 [2024-11-20 11:01:59.986175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:10.897 [2024-11-20 11:01:59.986185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.897 [2024-11-20 11:01:59.986194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.897 [2024-11-20 11:01:59.986252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.897 [2024-11-20 11:01:59.986265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:10.897 [2024-11-20 11:01:59.986275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.897 [2024-11-20 11:01:59.986288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.897 [2024-11-20 11:01:59.986304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.897 [2024-11-20 11:01:59.986315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:10.897 [2024-11-20 11:01:59.986324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.897 [2024-11-20 11:01:59.986334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.897 [2024-11-20 11:02:00.107098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.897 [2024-11-20 11:02:00.107147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:10.897 [2024-11-20 11:02:00.107184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:25:10.897 [2024-11-20 11:02:00.107195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.156 [2024-11-20 11:02:00.200622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.156 [2024-11-20 11:02:00.200662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:11.156 [2024-11-20 11:02:00.200675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.156 [2024-11-20 11:02:00.200686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.156 [2024-11-20 11:02:00.200769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.156 [2024-11-20 11:02:00.200781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:11.156 [2024-11-20 11:02:00.200791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.156 [2024-11-20 11:02:00.200800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.156 [2024-11-20 11:02:00.200841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.156 [2024-11-20 11:02:00.200852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:11.156 [2024-11-20 11:02:00.200861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.156 [2024-11-20 11:02:00.200870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.156 [2024-11-20 11:02:00.200972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.156 [2024-11-20 11:02:00.200985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:11.156 [2024-11-20 11:02:00.200995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.156 [2024-11-20 11:02:00.201003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.156 [2024-11-20 11:02:00.201040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.156 [2024-11-20 11:02:00.201051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:11.156 [2024-11-20 11:02:00.201061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.156 [2024-11-20 11:02:00.201071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.156 [2024-11-20 11:02:00.201105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.156 [2024-11-20 11:02:00.201116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:11.156 [2024-11-20 11:02:00.201125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.156 [2024-11-20 11:02:00.201134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.156 [2024-11-20 11:02:00.201180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.156 [2024-11-20 11:02:00.201191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:11.156 [2024-11-20 11:02:00.201201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.156 [2024-11-20 11:02:00.201209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.156 [2024-11-20 11:02:00.201349] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 647.921 ms, result 0 00:25:13.058 00:25:13.058 00:25:13.058 11:02:01 ftl.ftl_restore -- 
ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:25:13.058 [2024-11-20 11:02:02.055646] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:25:13.058 [2024-11-20 11:02:02.055767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80380 ] 00:25:13.058 [2024-11-20 11:02:02.233571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.316 [2024-11-20 11:02:02.343147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.574 [2024-11-20 11:02:02.666545] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:13.574 [2024-11-20 11:02:02.666644] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:13.574 [2024-11-20 11:02:02.825829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.835 [2024-11-20 11:02:02.826074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:13.835 [2024-11-20 11:02:02.826105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:13.835 [2024-11-20 11:02:02.826116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.835 [2024-11-20 11:02:02.826172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.835 [2024-11-20 11:02:02.826185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:13.835 [2024-11-20 11:02:02.826199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:13.835 [2024-11-20 11:02:02.826209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.835 [2024-11-20 11:02:02.826230] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:13.835 [2024-11-20 11:02:02.827190] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:13.835 [2024-11-20 11:02:02.827224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.835 [2024-11-20 11:02:02.827235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:13.835 [2024-11-20 11:02:02.827246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:25:13.835 [2024-11-20 11:02:02.827256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.835 [2024-11-20 11:02:02.828699] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:13.835 [2024-11-20 11:02:02.846580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.835 [2024-11-20 11:02:02.846628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:13.835 [2024-11-20 11:02:02.846642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.911 ms 00:25:13.835 [2024-11-20 11:02:02.846652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.835 [2024-11-20 11:02:02.846713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.835 [2024-11-20 11:02:02.846724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate 
super block 00:25:13.835 [2024-11-20 11:02:02.846735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:13.835 [2024-11-20 11:02:02.846744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.835 [2024-11-20 11:02:02.853334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.835 [2024-11-20 11:02:02.853471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:13.835 [2024-11-20 11:02:02.853490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.536 ms 00:25:13.835 [2024-11-20 11:02:02.853517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.835 [2024-11-20 11:02:02.853603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.835 [2024-11-20 11:02:02.853633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:13.835 [2024-11-20 11:02:02.853644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:25:13.835 [2024-11-20 11:02:02.853655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.835 [2024-11-20 11:02:02.853695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.835 [2024-11-20 11:02:02.853706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:13.835 [2024-11-20 11:02:02.853717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:13.835 [2024-11-20 11:02:02.853727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.835 [2024-11-20 11:02:02.853749] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:13.835 [2024-11-20 11:02:02.858340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.835 [2024-11-20 11:02:02.858367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:13.835 [2024-11-20 11:02:02.858379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.603 ms 00:25:13.835 [2024-11-20 11:02:02.858391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.835 [2024-11-20 11:02:02.858419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.835 [2024-11-20 11:02:02.858429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:13.835 [2024-11-20 11:02:02.858439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:13.835 [2024-11-20 11:02:02.858448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.835 [2024-11-20 11:02:02.858496] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:13.835 [2024-11-20 11:02:02.858525] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:13.835 [2024-11-20 11:02:02.858557] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:13.835 [2024-11-20 11:02:02.858576] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:13.835 [2024-11-20 11:02:02.858665] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:13.835 [2024-11-20 11:02:02.858678] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:13.835 
[2024-11-20 11:02:02.858690] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:13.835 [2024-11-20 11:02:02.858702] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:13.835 [2024-11-20 11:02:02.858714] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:13.835 [2024-11-20 11:02:02.858724] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:13.835 [2024-11-20 11:02:02.858733] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:13.835 [2024-11-20 11:02:02.858742] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:13.835 [2024-11-20 11:02:02.858751] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:13.835 [2024-11-20 11:02:02.858764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.835 [2024-11-20 11:02:02.858774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:13.835 [2024-11-20 11:02:02.858783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:25:13.835 [2024-11-20 11:02:02.858793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.835 [2024-11-20 11:02:02.858874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.835 [2024-11-20 11:02:02.858885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:13.835 [2024-11-20 11:02:02.858895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:13.835 [2024-11-20 11:02:02.858904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.835 [2024-11-20 11:02:02.858994] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:13.835 [2024-11-20 11:02:02.859011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:13.835 [2024-11-20 11:02:02.859022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:13.835 [2024-11-20 11:02:02.859032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.835 [2024-11-20 11:02:02.859042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:13.835 [2024-11-20 11:02:02.859051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:13.835 [2024-11-20 11:02:02.859060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:13.835 [2024-11-20 11:02:02.859070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:13.835 [2024-11-20 11:02:02.859080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:13.836 [2024-11-20 11:02:02.859090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:13.836 [2024-11-20 11:02:02.859100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:13.836 [2024-11-20 11:02:02.859109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:13.836 [2024-11-20 11:02:02.859117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:13.836 [2024-11-20 11:02:02.859127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:13.836 [2024-11-20 11:02:02.859136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:13.836 [2024-11-20 11:02:02.859153] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.836 [2024-11-20 11:02:02.859162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:13.836 [2024-11-20 11:02:02.859171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:13.836 [2024-11-20 11:02:02.859180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.836 [2024-11-20 11:02:02.859189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:13.836 [2024-11-20 11:02:02.859198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:13.836 [2024-11-20 11:02:02.859207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.836 [2024-11-20 11:02:02.859215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:13.836 [2024-11-20 11:02:02.859225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:13.836 [2024-11-20 11:02:02.859234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.836 [2024-11-20 11:02:02.859242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:13.836 [2024-11-20 11:02:02.859251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:13.836 [2024-11-20 11:02:02.859260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.836 [2024-11-20 11:02:02.859268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:13.836 [2024-11-20 11:02:02.859278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:13.836 [2024-11-20 11:02:02.859287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:13.836 [2024-11-20 11:02:02.859296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:13.836 [2024-11-20 11:02:02.859304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:13.836 [2024-11-20 11:02:02.859313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:13.836 [2024-11-20 11:02:02.859322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:13.836 [2024-11-20 11:02:02.859330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:13.836 [2024-11-20 11:02:02.859339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:13.836 [2024-11-20 11:02:02.859348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:13.836 [2024-11-20 11:02:02.859357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:13.836 [2024-11-20 11:02:02.859366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.836 [2024-11-20 11:02:02.859375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:13.836 [2024-11-20 11:02:02.859384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:13.836 [2024-11-20 11:02:02.859394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.836 [2024-11-20 11:02:02.859402] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:13.836 [2024-11-20 11:02:02.859412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:13.836 [2024-11-20 11:02:02.859421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:13.836 [2024-11-20 11:02:02.859430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:13.836 [2024-11-20 
11:02:02.859440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:13.836 [2024-11-20 11:02:02.859449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:13.836 [2024-11-20 11:02:02.859458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:13.836 [2024-11-20 11:02:02.859468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:13.836 [2024-11-20 11:02:02.859476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:13.836 [2024-11-20 11:02:02.859485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:13.836 [2024-11-20 11:02:02.859495] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:13.836 [2024-11-20 11:02:02.859507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:13.836 [2024-11-20 11:02:02.859518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:13.836 [2024-11-20 11:02:02.859528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:13.836 [2024-11-20 11:02:02.859538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:13.836 [2024-11-20 11:02:02.859548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:13.836 [2024-11-20 11:02:02.859557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:13.836 [2024-11-20 11:02:02.859567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:13.836 [2024-11-20 11:02:02.859577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:13.836 [2024-11-20 11:02:02.859586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:13.836 [2024-11-20 11:02:02.859596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:13.836 [2024-11-20 11:02:02.859605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:13.836 [2024-11-20 11:02:02.859626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:13.836 [2024-11-20 11:02:02.859635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:13.836 [2024-11-20 11:02:02.859645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:13.836 [2024-11-20 11:02:02.859655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:13.836 [2024-11-20 11:02:02.859666] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:25:13.836 [2024-11-20 11:02:02.859680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:13.836 [2024-11-20 11:02:02.859692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:13.836 [2024-11-20 11:02:02.859702] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:13.836 [2024-11-20 11:02:02.859714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:13.836 [2024-11-20 11:02:02.859723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:13.836 [2024-11-20 11:02:02.859734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.836 [2024-11-20 11:02:02.859744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:13.836 [2024-11-20 11:02:02.859753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:25:13.836 [2024-11-20 11:02:02.859763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.836 [2024-11-20 11:02:02.896797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.836 [2024-11-20 11:02:02.896955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:13.836 [2024-11-20 11:02:02.897052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.052 ms 00:25:13.836 [2024-11-20 11:02:02.897088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.836 [2024-11-20 11:02:02.897190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.836 [2024-11-20 11:02:02.897222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:13.836 [2024-11-20 11:02:02.897306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:13.836 [2024-11-20 11:02:02.897341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.836 [2024-11-20 11:02:02.948913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.836 [2024-11-20 11:02:02.949068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:13.836 [2024-11-20 11:02:02.949165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.576 ms 00:25:13.836 [2024-11-20 11:02:02.949202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.836 [2024-11-20 11:02:02.949256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.836 [2024-11-20 11:02:02.949288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:13.836 [2024-11-20 11:02:02.949319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:13.836 [2024-11-20 11:02:02.949409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.836 [2024-11-20 11:02:02.949930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.836 [2024-11-20 11:02:02.950038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:13.836 [2024-11-20 11:02:02.950111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:25:13.836 [2024-11-20 11:02:02.950145] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.836 [2024-11-20 11:02:02.950287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.836 [2024-11-20 11:02:02.950325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:13.836 [2024-11-20 11:02:02.950420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:25:13.836 [2024-11-20 11:02:02.950463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.836 [2024-11-20 11:02:02.968547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.836 [2024-11-20 11:02:02.968699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:13.836 [2024-11-20 11:02:02.968799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.060 ms 00:25:13.836 [2024-11-20 11:02:02.968835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.837 [2024-11-20 11:02:02.987341] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:13.837 [2024-11-20 11:02:02.987496] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:13.837 [2024-11-20 11:02:02.987618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.837 [2024-11-20 11:02:02.987653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:13.837 [2024-11-20 11:02:02.987685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.693 ms 00:25:13.837 [2024-11-20 11:02:02.987713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.837 [2024-11-20 11:02:03.017276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.837 [2024-11-20 11:02:03.017422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:13.837 [2024-11-20 11:02:03.017492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.554 ms 00:25:13.837 [2024-11-20 11:02:03.017529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.837 [2024-11-20 11:02:03.035880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.837 [2024-11-20 11:02:03.036021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:13.837 [2024-11-20 11:02:03.036092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.293 ms 00:25:13.837 [2024-11-20 11:02:03.036126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.837 [2024-11-20 11:02:03.053241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.837 [2024-11-20 11:02:03.053400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:13.837 [2024-11-20 11:02:03.053473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.064 ms 00:25:13.837 [2024-11-20 11:02:03.053508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:13.837 [2024-11-20 11:02:03.054992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:13.837 [2024-11-20 11:02:03.055126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:13.837 [2024-11-20 11:02:03.055201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:25:13.837 [2024-11-20 11:02:03.055222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:14.095 [2024-11-20 11:02:03.136015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.095 [2024-11-20 11:02:03.136241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:14.095 [2024-11-20 11:02:03.136353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.898 ms 00:25:14.095 [2024-11-20 11:02:03.136391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.095 [2024-11-20 11:02:03.146538] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:14.095 [2024-11-20 11:02:03.148955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.095 [2024-11-20 11:02:03.148981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:14.095 [2024-11-20 11:02:03.148993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.468 ms 00:25:14.095 [2024-11-20 11:02:03.149003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.095 [2024-11-20 11:02:03.149076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.095 [2024-11-20 11:02:03.149089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:14.095 [2024-11-20 11:02:03.149100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:14.095 [2024-11-20 11:02:03.149115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.095 [2024-11-20 11:02:03.150526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.095 [2024-11-20 11:02:03.150562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:14.095 [2024-11-20 11:02:03.150574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms 00:25:14.095 [2024-11-20 11:02:03.150584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.095 [2024-11-20 11:02:03.150624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.095 [2024-11-20 11:02:03.150635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:14.095 [2024-11-20 11:02:03.150646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:14.095 [2024-11-20 11:02:03.150656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.095 [2024-11-20 11:02:03.150700] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:14.095 [2024-11-20 11:02:03.150718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.095 [2024-11-20 11:02:03.150728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:14.095 [2024-11-20 11:02:03.150739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:14.095 [2024-11-20 11:02:03.150749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.095 [2024-11-20 11:02:03.184994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.095 [2024-11-20 11:02:03.185157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:14.095 [2024-11-20 11:02:03.185243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.281 ms 00:25:14.095 [2024-11-20 11:02:03.185293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.095 [2024-11-20 11:02:03.185412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:14.095 [2024-11-20 
11:02:03.185452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:14.095 [2024-11-20 11:02:03.185535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:14.095 [2024-11-20 11:02:03.185569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:14.095 [2024-11-20 11:02:03.186664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 360.956 ms, result 0 00:25:15.497  [2024-11-20T11:02:05.744Z] Copying: 20/1024 [MB] (20 MBps) [... intermediate copy progress updates elided ...] [2024-11-20T11:02:42.677Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-20 11:02:42.660646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.424 [2024-11-20 11:02:42.660708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:53.424 [2024-11-20 11:02:42.660752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:53.424 [2024-11-20 11:02:42.660763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.424 [2024-11-20 11:02:42.660791] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:53.424 [2024-11-20 11:02:42.665332]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.424 [2024-11-20 11:02:42.665472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:53.425 [2024-11-20 11:02:42.665609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.529 ms 00:25:53.425 [2024-11-20 11:02:42.665648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.425 [2024-11-20 11:02:42.665866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.425 [2024-11-20 11:02:42.665973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:53.425 [2024-11-20 11:02:42.666032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:25:53.425 [2024-11-20 11:02:42.666062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.425 [2024-11-20 11:02:42.670944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.425 [2024-11-20 11:02:42.671101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:53.425 [2024-11-20 11:02:42.671191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.845 ms 00:25:53.425 [2024-11-20 11:02:42.671208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.684 [2024-11-20 11:02:42.676682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.684 [2024-11-20 11:02:42.676718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:53.684 [2024-11-20 11:02:42.676730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.444 ms 00:25:53.684 [2024-11-20 11:02:42.676753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.684 [2024-11-20 11:02:42.714116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.684 [2024-11-20 11:02:42.714157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:53.684 [2024-11-20 11:02:42.714172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.377 ms 00:25:53.684 [2024-11-20 11:02:42.714183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.684 [2024-11-20 11:02:42.734739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.684 [2024-11-20 11:02:42.734803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:53.684 [2024-11-20 11:02:42.734819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.551 ms 00:25:53.684 [2024-11-20 11:02:42.734829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.684 [2024-11-20 11:02:42.870939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.684 [2024-11-20 11:02:42.871026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:53.684 [2024-11-20 11:02:42.871044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 136.281 ms 00:25:53.684 [2024-11-20 11:02:42.871055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.684 [2024-11-20 11:02:42.908563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.684 [2024-11-20 11:02:42.908620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:53.684 [2024-11-20 11:02:42.908653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.549 ms 00:25:53.684 [2024-11-20 11:02:42.908664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:53.943 [2024-11-20 11:02:42.943798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.943 [2024-11-20 11:02:42.943845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:53.943 [2024-11-20 11:02:42.943872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.146 ms 00:25:53.944 [2024-11-20 11:02:42.943883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.944 [2024-11-20 11:02:42.980641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.944 [2024-11-20 11:02:42.980689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:53.944 [2024-11-20 11:02:42.980705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.774 ms 00:25:53.944 [2024-11-20 11:02:42.980716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.944 [2024-11-20 11:02:43.016374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.944 [2024-11-20 11:02:43.016419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:53.944 [2024-11-20 11:02:43.016434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.625 ms 00:25:53.944 [2024-11-20 11:02:43.016444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.944 [2024-11-20 11:02:43.016485] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:53.944 [2024-11-20 11:02:43.016503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:53.944 [2024-11-20 11:02:43.016517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016696] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 
11:02:43.016958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.016989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 
00:25:53.944 [2024-11-20 11:02:43.017221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:53.944 [2024-11-20 11:02:43.017297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 
wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:53.945 [2024-11-20 11:02:43.017613] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:53.945 [2024-11-20 11:02:43.017623] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b1bdabc-6e72-4c1d-a5ab-4dbd8ece5857 00:25:53.945 [2024-11-20 11:02:43.017634] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:53.945 [2024-11-20 11:02:43.017644] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 31936 00:25:53.945 [2024-11-20 11:02:43.017653] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 30976 00:25:53.945 [2024-11-20 11:02:43.017664] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0310 00:25:53.945 [2024-11-20 11:02:43.017673] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:53.945 [2024-11-20 11:02:43.017690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:53.945 [2024-11-20 11:02:43.017700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:53.945 [2024-11-20 11:02:43.017720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:53.945 [2024-11-20 11:02:43.017729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:53.945 [2024-11-20 11:02:43.017739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.945 [2024-11-20 11:02:43.017750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:53.945 [2024-11-20 11:02:43.017760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:25:53.945 [2024-11-20 11:02:43.017770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.945 [2024-11-20 11:02:43.037866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.945 [2024-11-20 11:02:43.037903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:25:53.945 [2024-11-20 11:02:43.037916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.090 ms 00:25:53.945 [2024-11-20 11:02:43.037948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.945 [2024-11-20 11:02:43.038522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.945 [2024-11-20 11:02:43.038547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:53.945 [2024-11-20 11:02:43.038560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:25:53.945 [2024-11-20 11:02:43.038570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.945 [2024-11-20 11:02:43.089899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.945 [2024-11-20 11:02:43.089939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:53.945 [2024-11-20 11:02:43.089959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.945 [2024-11-20 11:02:43.089969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.945 [2024-11-20 11:02:43.090045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.945 [2024-11-20 11:02:43.090056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:53.945 [2024-11-20 11:02:43.090066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.945 [2024-11-20 11:02:43.090077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.945 [2024-11-20 11:02:43.090171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.945 [2024-11-20 11:02:43.090185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:53.945 [2024-11-20 11:02:43.090196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.945 [2024-11-20 11:02:43.090210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.945 [2024-11-20 11:02:43.090227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.945 [2024-11-20 11:02:43.090237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:53.945 [2024-11-20 11:02:43.090248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.945 [2024-11-20 11:02:43.090257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.204 [2024-11-20 11:02:43.213524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.204 [2024-11-20 11:02:43.213573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:54.204 [2024-11-20 11:02:43.213610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.204 [2024-11-20 11:02:43.213622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.204 [2024-11-20 11:02:43.312996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.204 [2024-11-20 11:02:43.313054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:54.204 [2024-11-20 11:02:43.313071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.204 [2024-11-20 11:02:43.313081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.204 [2024-11-20 11:02:43.313175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.204 [2024-11-20 11:02:43.313187] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:54.204 [2024-11-20 11:02:43.313198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.204 [2024-11-20 11:02:43.313207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.204 [2024-11-20 11:02:43.313251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.204 [2024-11-20 11:02:43.313262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:54.204 [2024-11-20 11:02:43.313272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.204 [2024-11-20 11:02:43.313282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.204 [2024-11-20 11:02:43.313393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.204 [2024-11-20 11:02:43.313406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:54.204 [2024-11-20 11:02:43.313416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.204 [2024-11-20 11:02:43.313426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.204 [2024-11-20 11:02:43.313464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.204 [2024-11-20 11:02:43.313476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:54.204 [2024-11-20 11:02:43.313487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.204 [2024-11-20 11:02:43.313496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.204 [2024-11-20 11:02:43.313534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.204 [2024-11-20 11:02:43.313545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:54.204 [2024-11-20 11:02:43.313556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.204 [2024-11-20 11:02:43.313565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.204 [2024-11-20 11:02:43.313671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.204 [2024-11-20 11:02:43.313685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:54.204 [2024-11-20 11:02:43.313695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.204 [2024-11-20 11:02:43.313705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.204 [2024-11-20 11:02:43.313859] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 654.232 ms, result 0 00:25:55.143 00:25:55.143 00:25:55.143 11:02:44 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:57.051 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:57.051 11:02:46 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:57.051 11:02:46 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:57.051 11:02:46 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:57.051 11:02:46 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:57.051 11:02:46 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:57.051 11:02:46 ftl.ftl_restore -- ftl/restore.sh@32 -- # 
killprocess 78739 00:25:57.051 11:02:46 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78739 ']' 00:25:57.051 11:02:46 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78739 00:25:57.051 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78739) - No such process 00:25:57.051 Process with pid 78739 is not found 00:25:57.051 Remove shared memory files 00:25:57.051 11:02:46 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 78739 is not found' 00:25:57.051 11:02:46 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:57.051 11:02:46 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:57.051 11:02:46 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:57.051 11:02:46 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:57.051 11:02:46 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:57.051 11:02:46 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:57.051 11:02:46 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:57.051 ************************************ 00:25:57.051 END TEST ftl_restore 00:25:57.051 ************************************ 00:25:57.051 00:25:57.051 real 3m23.708s 00:25:57.051 user 3m11.060s 00:25:57.051 sys 0m13.167s 00:25:57.051 11:02:46 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:57.051 11:02:46 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:57.311 11:02:46 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:57.311 11:02:46 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:57.311 11:02:46 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:57.311 11:02:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:57.311 ************************************ 00:25:57.311 START TEST ftl_dirty_shutdown 00:25:57.312 ************************************ 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:57.312 * Looking for test storage... 
00:25:57.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:57.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.312 --rc genhtml_branch_coverage=1 00:25:57.312 --rc genhtml_function_coverage=1 00:25:57.312 --rc genhtml_legend=1 00:25:57.312 --rc geninfo_all_blocks=1 00:25:57.312 --rc geninfo_unexecuted_blocks=1 00:25:57.312 00:25:57.312 ' 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:57.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.312 --rc genhtml_branch_coverage=1 00:25:57.312 --rc genhtml_function_coverage=1 00:25:57.312 --rc genhtml_legend=1 00:25:57.312 --rc geninfo_all_blocks=1 00:25:57.312 --rc geninfo_unexecuted_blocks=1 00:25:57.312 00:25:57.312 ' 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:57.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.312 --rc genhtml_branch_coverage=1 00:25:57.312 --rc genhtml_function_coverage=1 00:25:57.312 --rc genhtml_legend=1 00:25:57.312 --rc geninfo_all_blocks=1 00:25:57.312 --rc geninfo_unexecuted_blocks=1 00:25:57.312 00:25:57.312 ' 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:57.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.312 --rc genhtml_branch_coverage=1 00:25:57.312 --rc genhtml_function_coverage=1 00:25:57.312 --rc genhtml_legend=1 00:25:57.312 --rc geninfo_all_blocks=1 00:25:57.312 --rc geninfo_unexecuted_blocks=1 00:25:57.312 00:25:57.312 ' 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:57.312 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:57.572 11:02:46 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80900 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80900 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80900 ']' 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:57.572 11:02:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:57.572 [2024-11-20 11:02:46.686681] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
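
At this point the harness has parsed dirty_shutdown.sh's options (getopts :u:c: mapped -c to the 0000:00:10.0 cache device, the positional args set device=0000:00:11.0 and timeout=240), launched spdk_tgt pinned to core 0 (-m 0x1, svcpid=80900), and is blocking in waitforlisten until the target answers on /var/tmp/spdk.sock. A rough sketch of that launch-and-wait pattern, simplified from the common/autotest_common.sh helper traced above (the real waitforlisten also disables xtrace and handles several failure modes not shown):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!                      # 80900 in this run
  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  for (( i = 1; i <= max_retries; i++ )); do
      # Stop waiting as soon as the target answers an RPC probe on its socket.
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
          break
      fi
      kill -0 "$svcpid" || exit 1   # bail out if spdk_tgt died before listening
      sleep 0.5
  done
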
00:25:57.572 [2024-11-20 11:02:46.687075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80900 ] 00:25:57.831 [2024-11-20 11:02:46.868715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.831 [2024-11-20 11:02:46.982681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.770 11:02:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.770 11:02:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:58.770 11:02:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:58.770 11:02:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:58.770 11:02:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:58.770 11:02:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:58.770 11:02:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:58.770 11:02:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:59.030 11:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:59.030 11:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:59.030 11:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:59.030 11:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:59.030 11:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:59.030 11:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:59.030 11:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:59.030 11:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:59.291 11:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:59.291 { 00:25:59.291 "name": "nvme0n1", 00:25:59.291 "aliases": [ 00:25:59.291 "16b4fb40-def4-4280-ad42-8b3e8cccad39" 00:25:59.291 ], 00:25:59.291 "product_name": "NVMe disk", 00:25:59.291 "block_size": 4096, 00:25:59.291 "num_blocks": 1310720, 00:25:59.291 "uuid": "16b4fb40-def4-4280-ad42-8b3e8cccad39", 00:25:59.291 "numa_id": -1, 00:25:59.291 "assigned_rate_limits": { 00:25:59.291 "rw_ios_per_sec": 0, 00:25:59.291 "rw_mbytes_per_sec": 0, 00:25:59.291 "r_mbytes_per_sec": 0, 00:25:59.291 "w_mbytes_per_sec": 0 00:25:59.291 }, 00:25:59.291 "claimed": true, 00:25:59.291 "claim_type": "read_many_write_one", 00:25:59.291 "zoned": false, 00:25:59.291 "supported_io_types": { 00:25:59.291 "read": true, 00:25:59.291 "write": true, 00:25:59.291 "unmap": true, 00:25:59.291 "flush": true, 00:25:59.291 "reset": true, 00:25:59.291 "nvme_admin": true, 00:25:59.291 "nvme_io": true, 00:25:59.291 "nvme_io_md": false, 00:25:59.291 "write_zeroes": true, 00:25:59.291 "zcopy": false, 00:25:59.291 "get_zone_info": false, 00:25:59.291 "zone_management": false, 00:25:59.291 "zone_append": false, 00:25:59.291 "compare": true, 00:25:59.291 "compare_and_write": false, 00:25:59.291 "abort": true, 00:25:59.291 "seek_hole": false, 00:25:59.291 "seek_data": false, 00:25:59.291 
"copy": true, 00:25:59.291 "nvme_iov_md": false 00:25:59.291 }, 00:25:59.291 "driver_specific": { 00:25:59.291 "nvme": [ 00:25:59.291 { 00:25:59.291 "pci_address": "0000:00:11.0", 00:25:59.291 "trid": { 00:25:59.291 "trtype": "PCIe", 00:25:59.291 "traddr": "0000:00:11.0" 00:25:59.291 }, 00:25:59.291 "ctrlr_data": { 00:25:59.291 "cntlid": 0, 00:25:59.291 "vendor_id": "0x1b36", 00:25:59.291 "model_number": "QEMU NVMe Ctrl", 00:25:59.291 "serial_number": "12341", 00:25:59.291 "firmware_revision": "8.0.0", 00:25:59.291 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:59.291 "oacs": { 00:25:59.291 "security": 0, 00:25:59.291 "format": 1, 00:25:59.291 "firmware": 0, 00:25:59.291 "ns_manage": 1 00:25:59.291 }, 00:25:59.291 "multi_ctrlr": false, 00:25:59.291 "ana_reporting": false 00:25:59.291 }, 00:25:59.291 "vs": { 00:25:59.291 "nvme_version": "1.4" 00:25:59.291 }, 00:25:59.291 "ns_data": { 00:25:59.291 "id": 1, 00:25:59.291 "can_share": false 00:25:59.291 } 00:25:59.291 } 00:25:59.291 ], 00:25:59.291 "mp_policy": "active_passive" 00:25:59.291 } 00:25:59.291 } 00:25:59.291 ]' 00:25:59.291 11:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:59.291 11:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:59.291 11:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:59.291 11:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:59.291 11:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:59.291 11:02:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:59.291 11:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:59.291 11:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:59.291 11:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:59.291 11:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:59.291 11:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:59.551 11:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=61bd11e1-2efb-45b1-a6e8-5b77514778f3 00:25:59.551 11:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:59.551 11:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 61bd11e1-2efb-45b1-a6e8-5b77514778f3 00:25:59.811 11:02:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:00.073 11:02:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=0d3d0a42-a2eb-4233-89cc-d67fe8cfaf8b 00:26:00.073 11:02:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0d3d0a42-a2eb-4233-89cc-d67fe8cfaf8b 00:26:00.073 11:02:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=3d32297b-0cba-421b-b3b5-0c24f3d5def9 00:26:00.073 11:02:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:26:00.073 11:02:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3d32297b-0cba-421b-b3b5-0c24f3d5def9 00:26:00.073 11:02:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:26:00.073 11:02:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:26:00.073 11:02:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=3d32297b-0cba-421b-b3b5-0c24f3d5def9 00:26:00.073 11:02:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:26:00.073 11:02:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3d32297b-0cba-421b-b3b5-0c24f3d5def9 00:26:00.073 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3d32297b-0cba-421b-b3b5-0c24f3d5def9 00:26:00.073 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:00.073 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:00.073 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:00.073 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3d32297b-0cba-421b-b3b5-0c24f3d5def9 00:26:00.332 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:00.332 { 00:26:00.332 "name": "3d32297b-0cba-421b-b3b5-0c24f3d5def9", 00:26:00.332 "aliases": [ 00:26:00.332 "lvs/nvme0n1p0" 00:26:00.332 ], 00:26:00.332 "product_name": "Logical Volume", 00:26:00.332 "block_size": 4096, 00:26:00.332 "num_blocks": 26476544, 00:26:00.332 "uuid": "3d32297b-0cba-421b-b3b5-0c24f3d5def9", 00:26:00.332 "assigned_rate_limits": { 00:26:00.332 "rw_ios_per_sec": 0, 00:26:00.332 "rw_mbytes_per_sec": 0, 00:26:00.332 "r_mbytes_per_sec": 0, 00:26:00.332 "w_mbytes_per_sec": 0 00:26:00.332 }, 00:26:00.332 "claimed": false, 00:26:00.332 "zoned": false, 00:26:00.332 "supported_io_types": { 00:26:00.332 "read": true, 00:26:00.332 "write": true, 00:26:00.332 "unmap": true, 00:26:00.332 "flush": false, 00:26:00.332 "reset": true, 00:26:00.332 "nvme_admin": false, 00:26:00.332 "nvme_io": false, 00:26:00.332 "nvme_io_md": false, 00:26:00.332 "write_zeroes": true, 00:26:00.332 "zcopy": false, 00:26:00.332 "get_zone_info": false, 00:26:00.332 "zone_management": false, 00:26:00.332 "zone_append": false, 00:26:00.332 "compare": false, 00:26:00.332 "compare_and_write": false, 00:26:00.332 "abort": false, 00:26:00.332 "seek_hole": true, 00:26:00.332 "seek_data": true, 00:26:00.333 "copy": false, 00:26:00.333 "nvme_iov_md": false 00:26:00.333 }, 00:26:00.333 "driver_specific": { 00:26:00.333 "lvol": { 00:26:00.333 "lvol_store_uuid": "0d3d0a42-a2eb-4233-89cc-d67fe8cfaf8b", 00:26:00.333 "base_bdev": "nvme0n1", 00:26:00.333 "thin_provision": true, 00:26:00.333 "num_allocated_clusters": 0, 00:26:00.333 "snapshot": false, 00:26:00.333 "clone": false, 00:26:00.333 "esnap_clone": false 00:26:00.333 } 00:26:00.333 } 00:26:00.333 } 00:26:00.333 ]' 00:26:00.333 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:00.333 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:00.333 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:00.592 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:00.592 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:00.592 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:00.592 11:02:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:26:00.592 11:02:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:00.592 11:02:49 ftl.ftl_dirty_shutdown -- 
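
Each of the size probes above follows the same recipe: bdev_get_bdevs -b <bdev> returns a one-element JSON array, jq pulls block_size and num_blocks out of it, and the helper prints num_blocks * block_size in MiB. A condensed sketch of that get_bdev_size flow (rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used in the log):

  get_bdev_size() {   # condensed sketch of the helper traced above
      local bdev_name=$1 bdev_info bs nb
      bdev_info=$(rpc.py bdev_get_bdevs -b "$bdev_name")
      bs=$(jq '.[] .block_size' <<< "$bdev_info")
      nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
      echo $(( nb * bs / 1024 / 1024 ))   # size in MiB
  }
  # 1310720 blocks  * 4096 B ->   5120 MiB (the QEMU NVMe namespace nvme0n1)
  # 26476544 blocks * 4096 B -> 103424 MiB (the thin-provisioned lvol)
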
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:00.851 11:02:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:00.851 11:02:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:00.851 11:02:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 3d32297b-0cba-421b-b3b5-0c24f3d5def9 00:26:00.851 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3d32297b-0cba-421b-b3b5-0c24f3d5def9 00:26:00.851 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:00.851 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:00.851 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:00.851 11:02:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3d32297b-0cba-421b-b3b5-0c24f3d5def9 00:26:00.851 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:00.851 { 00:26:00.851 "name": "3d32297b-0cba-421b-b3b5-0c24f3d5def9", 00:26:00.851 "aliases": [ 00:26:00.851 "lvs/nvme0n1p0" 00:26:00.851 ], 00:26:00.851 "product_name": "Logical Volume", 00:26:00.851 "block_size": 4096, 00:26:00.851 "num_blocks": 26476544, 00:26:00.851 "uuid": "3d32297b-0cba-421b-b3b5-0c24f3d5def9", 00:26:00.851 "assigned_rate_limits": { 00:26:00.851 "rw_ios_per_sec": 0, 00:26:00.851 "rw_mbytes_per_sec": 0, 00:26:00.851 "r_mbytes_per_sec": 0, 00:26:00.851 "w_mbytes_per_sec": 0 00:26:00.851 }, 00:26:00.851 "claimed": false, 00:26:00.851 "zoned": false, 00:26:00.851 "supported_io_types": { 00:26:00.851 "read": true, 00:26:00.851 "write": true, 00:26:00.851 "unmap": true, 00:26:00.851 "flush": false, 00:26:00.851 "reset": true, 00:26:00.851 "nvme_admin": false, 00:26:00.852 "nvme_io": false, 00:26:00.852 "nvme_io_md": false, 00:26:00.852 "write_zeroes": true, 00:26:00.852 "zcopy": false, 00:26:00.852 "get_zone_info": false, 00:26:00.852 "zone_management": false, 00:26:00.852 "zone_append": false, 00:26:00.852 "compare": false, 00:26:00.852 "compare_and_write": false, 00:26:00.852 "abort": false, 00:26:00.852 "seek_hole": true, 00:26:00.852 "seek_data": true, 00:26:00.852 "copy": false, 00:26:00.852 "nvme_iov_md": false 00:26:00.852 }, 00:26:00.852 "driver_specific": { 00:26:00.852 "lvol": { 00:26:00.852 "lvol_store_uuid": "0d3d0a42-a2eb-4233-89cc-d67fe8cfaf8b", 00:26:00.852 "base_bdev": "nvme0n1", 00:26:00.852 "thin_provision": true, 00:26:00.852 "num_allocated_clusters": 0, 00:26:00.852 "snapshot": false, 00:26:00.852 "clone": false, 00:26:00.852 "esnap_clone": false 00:26:00.852 } 00:26:00.852 } 00:26:00.852 } 00:26:00.852 ]' 00:26:00.852 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:01.111 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:01.111 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:01.111 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:01.111 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:01.111 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:01.111 11:02:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:26:01.111 11:02:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:01.111 11:02:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:26:01.371 11:02:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 3d32297b-0cba-421b-b3b5-0c24f3d5def9 00:26:01.371 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3d32297b-0cba-421b-b3b5-0c24f3d5def9 00:26:01.371 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:01.371 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:01.371 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:01.371 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3d32297b-0cba-421b-b3b5-0c24f3d5def9 00:26:01.371 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:01.371 { 00:26:01.371 "name": "3d32297b-0cba-421b-b3b5-0c24f3d5def9", 00:26:01.371 "aliases": [ 00:26:01.371 "lvs/nvme0n1p0" 00:26:01.371 ], 00:26:01.371 "product_name": "Logical Volume", 00:26:01.371 "block_size": 4096, 00:26:01.371 "num_blocks": 26476544, 00:26:01.371 "uuid": "3d32297b-0cba-421b-b3b5-0c24f3d5def9", 00:26:01.371 "assigned_rate_limits": { 00:26:01.371 "rw_ios_per_sec": 0, 00:26:01.371 "rw_mbytes_per_sec": 0, 00:26:01.371 "r_mbytes_per_sec": 0, 00:26:01.371 "w_mbytes_per_sec": 0 00:26:01.371 }, 00:26:01.371 "claimed": false, 00:26:01.371 "zoned": false, 00:26:01.371 "supported_io_types": { 00:26:01.371 "read": true, 00:26:01.371 "write": true, 00:26:01.371 "unmap": true, 00:26:01.371 "flush": false, 00:26:01.371 "reset": true, 00:26:01.371 "nvme_admin": false, 00:26:01.371 "nvme_io": false, 00:26:01.371 "nvme_io_md": false, 00:26:01.371 "write_zeroes": true, 00:26:01.371 "zcopy": false, 00:26:01.371 "get_zone_info": false, 00:26:01.371 "zone_management": false, 00:26:01.371 "zone_append": false, 00:26:01.371 "compare": false, 00:26:01.371 "compare_and_write": false, 00:26:01.371 "abort": false, 00:26:01.371 "seek_hole": true, 00:26:01.371 "seek_data": true, 00:26:01.371 "copy": false, 00:26:01.371 "nvme_iov_md": false 00:26:01.371 }, 00:26:01.371 "driver_specific": { 00:26:01.371 "lvol": { 00:26:01.371 "lvol_store_uuid": "0d3d0a42-a2eb-4233-89cc-d67fe8cfaf8b", 00:26:01.371 "base_bdev": "nvme0n1", 00:26:01.371 "thin_provision": true, 00:26:01.371 "num_allocated_clusters": 0, 00:26:01.371 "snapshot": false, 00:26:01.371 "clone": false, 00:26:01.371 "esnap_clone": false 00:26:01.371 } 00:26:01.371 } 00:26:01.371 } 00:26:01.371 ]' 00:26:01.371 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:01.371 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:01.371 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:01.631 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:01.631 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:01.631 11:02:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:01.631 11:02:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:01.631 11:02:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3d32297b-0cba-421b-b3b5-0c24f3d5def9 
--l2p_dram_limit 10' 00:26:01.631 11:02:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:01.631 11:02:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:01.631 11:02:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:01.631 11:02:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3d32297b-0cba-421b-b3b5-0c24f3d5def9 --l2p_dram_limit 10 -c nvc0n1p0 00:26:01.631 [2024-11-20 11:02:50.851154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.631 [2024-11-20 11:02:50.851208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:01.632 [2024-11-20 11:02:50.851243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:01.632 [2024-11-20 11:02:50.851254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.632 [2024-11-20 11:02:50.851321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.632 [2024-11-20 11:02:50.851333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:01.632 [2024-11-20 11:02:50.851346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:01.632 [2024-11-20 11:02:50.851356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.632 [2024-11-20 11:02:50.851386] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:01.632 [2024-11-20 11:02:50.852411] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:01.632 [2024-11-20 11:02:50.852448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.632 [2024-11-20 11:02:50.852460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:01.632 [2024-11-20 11:02:50.852474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.071 ms 00:26:01.632 [2024-11-20 11:02:50.852484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.632 [2024-11-20 11:02:50.852624] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 690cb0a8-9167-4dbe-8ee2-0c9a3b18b8b9 00:26:01.632 [2024-11-20 11:02:50.854050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.632 [2024-11-20 11:02:50.854080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:01.632 [2024-11-20 11:02:50.854093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:01.632 [2024-11-20 11:02:50.854108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.632 [2024-11-20 11:02:50.861551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.632 [2024-11-20 11:02:50.861735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:01.632 [2024-11-20 11:02:50.861761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.413 ms 00:26:01.632 [2024-11-20 11:02:50.861775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.632 [2024-11-20 11:02:50.861882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.632 [2024-11-20 11:02:50.861898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:01.632 [2024-11-20 11:02:50.861909] 
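
Put together, the trace so far has assembled the FTL stack with six RPCs: a base NVMe controller on 0000:00:11.0, a fresh lvstore (after clear_lvols removed the stale 61bd11e1-... store), a 103424 MiB thin lvol, a second controller on 0000:00:10.0 for the NV cache, a 5171 MiB split partition of it, and finally the FTL bdev with a 10 MiB L2P DRAM budget, whose startup is what the mngt/ftl_mngt.c records below are tracing. Condensed from the commands above (rpc.py abbreviates the full scripts/rpc.py path):

  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0d3d0a42-a2eb-4233-89cc-d67fe8cfaf8b
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  rpc.py bdev_split_create nvc0n1 -s 5171 1
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3d32297b-0cba-421b-b3b5-0c24f3d5def9 --l2p_dram_limit 10 -c nvc0n1p0
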
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:26:01.632 [2024-11-20 11:02:50.861926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.632 [2024-11-20 11:02:50.861991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.632 [2024-11-20 11:02:50.862007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:01.632 [2024-11-20 11:02:50.862018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:01.632 [2024-11-20 11:02:50.862034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.632 [2024-11-20 11:02:50.862060] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:01.632 [2024-11-20 11:02:50.866832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.632 [2024-11-20 11:02:50.866867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:01.632 [2024-11-20 11:02:50.866883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.784 ms 00:26:01.632 [2024-11-20 11:02:50.866893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.632 [2024-11-20 11:02:50.866930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.632 [2024-11-20 11:02:50.866940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:01.632 [2024-11-20 11:02:50.866953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:01.632 [2024-11-20 11:02:50.866963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.632 [2024-11-20 11:02:50.866999] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:01.632 [2024-11-20 11:02:50.867120] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:01.632 [2024-11-20 11:02:50.867140] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:01.632 [2024-11-20 11:02:50.867153] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:01.632 [2024-11-20 11:02:50.867168] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:01.632 [2024-11-20 11:02:50.867180] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:01.632 [2024-11-20 11:02:50.867194] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:01.632 [2024-11-20 11:02:50.867203] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:01.632 [2024-11-20 11:02:50.867218] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:01.632 [2024-11-20 11:02:50.867228] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:01.632 [2024-11-20 11:02:50.867241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.632 [2024-11-20 11:02:50.867251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:01.632 [2024-11-20 11:02:50.867264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:26:01.632 [2024-11-20 11:02:50.867283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.632 [2024-11-20 11:02:50.867358] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.632 [2024-11-20 11:02:50.867368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:01.632 [2024-11-20 11:02:50.867381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:26:01.632 [2024-11-20 11:02:50.867391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.632 [2024-11-20 11:02:50.867486] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:01.632 [2024-11-20 11:02:50.867499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:01.632 [2024-11-20 11:02:50.867512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:01.632 [2024-11-20 11:02:50.867522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:01.632 [2024-11-20 11:02:50.867535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:01.632 [2024-11-20 11:02:50.867544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:01.632 [2024-11-20 11:02:50.867556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:01.632 [2024-11-20 11:02:50.867565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:01.632 [2024-11-20 11:02:50.867577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:01.632 [2024-11-20 11:02:50.867586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:01.632 [2024-11-20 11:02:50.867621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:01.632 [2024-11-20 11:02:50.867648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:01.632 [2024-11-20 11:02:50.867660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:01.632 [2024-11-20 11:02:50.867669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:01.632 [2024-11-20 11:02:50.867682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:01.632 [2024-11-20 11:02:50.867692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:01.632 [2024-11-20 11:02:50.867716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:01.632 [2024-11-20 11:02:50.867726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:01.632 [2024-11-20 11:02:50.867739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:01.632 [2024-11-20 11:02:50.867749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:01.632 [2024-11-20 11:02:50.867760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:01.632 [2024-11-20 11:02:50.867786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:01.632 [2024-11-20 11:02:50.867797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:01.632 [2024-11-20 11:02:50.867806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:01.632 [2024-11-20 11:02:50.867818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:01.632 [2024-11-20 11:02:50.867827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:01.632 [2024-11-20 11:02:50.867839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:01.632 [2024-11-20 11:02:50.867848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:01.632 [2024-11-20 11:02:50.867859] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:01.632 [2024-11-20 11:02:50.867869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:01.632 [2024-11-20 11:02:50.867880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:01.632 [2024-11-20 11:02:50.867889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:01.632 [2024-11-20 11:02:50.867907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:01.632 [2024-11-20 11:02:50.867916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:01.632 [2024-11-20 11:02:50.867927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:01.632 [2024-11-20 11:02:50.867936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:01.632 [2024-11-20 11:02:50.867948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:01.632 [2024-11-20 11:02:50.867957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:01.632 [2024-11-20 11:02:50.867969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:01.632 [2024-11-20 11:02:50.867978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:01.632 [2024-11-20 11:02:50.867990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:01.632 [2024-11-20 11:02:50.867999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:01.632 [2024-11-20 11:02:50.868010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:01.632 [2024-11-20 11:02:50.868019] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:01.632 [2024-11-20 11:02:50.868032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:01.632 [2024-11-20 11:02:50.868042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:01.632 [2024-11-20 11:02:50.868056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:01.632 [2024-11-20 11:02:50.868067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:01.632 [2024-11-20 11:02:50.868081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:01.632 [2024-11-20 11:02:50.868090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:01.632 [2024-11-20 11:02:50.868102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:01.632 [2024-11-20 11:02:50.868111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:01.632 [2024-11-20 11:02:50.868123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:01.632 [2024-11-20 11:02:50.868137] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:01.632 [2024-11-20 11:02:50.868152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:01.632 [2024-11-20 11:02:50.868166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:01.632 [2024-11-20 11:02:50.868179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:01.632 [2024-11-20 11:02:50.868189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:01.632 [2024-11-20 11:02:50.868202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:01.632 [2024-11-20 11:02:50.868212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:01.632 [2024-11-20 11:02:50.868225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:01.632 [2024-11-20 11:02:50.868235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:01.632 [2024-11-20 11:02:50.868248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:01.632 [2024-11-20 11:02:50.868258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:01.632 [2024-11-20 11:02:50.868274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:01.632 [2024-11-20 11:02:50.868284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:01.632 [2024-11-20 11:02:50.868296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:01.632 [2024-11-20 11:02:50.868307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:01.632 [2024-11-20 11:02:50.868321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:01.632 [2024-11-20 11:02:50.868331] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:01.632 [2024-11-20 11:02:50.868345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:01.632 [2024-11-20 11:02:50.868357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:01.632 [2024-11-20 11:02:50.868370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:01.632 [2024-11-20 11:02:50.868381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:01.632 [2024-11-20 11:02:50.868394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:01.632 [2024-11-20 11:02:50.868405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.632 [2024-11-20 11:02:50.868417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:01.632 [2024-11-20 11:02:50.868427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:26:01.632 [2024-11-20 11:02:50.868444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.632 [2024-11-20 11:02:50.868486] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:01.632 [2024-11-20 11:02:50.868504] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:06.903 [2024-11-20 11:02:55.878754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.903 [2024-11-20 11:02:55.878819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:06.903 [2024-11-20 11:02:55.878837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5018.401 ms 00:26:06.903 [2024-11-20 11:02:55.878850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.903 [2024-11-20 11:02:55.916310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.903 [2024-11-20 11:02:55.916361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:06.903 [2024-11-20 11:02:55.916377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.266 ms 00:26:06.903 [2024-11-20 11:02:55.916389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.903 [2024-11-20 11:02:55.916512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.903 [2024-11-20 11:02:55.916527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:06.903 [2024-11-20 11:02:55.916539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:26:06.903 [2024-11-20 11:02:55.916553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.903 [2024-11-20 11:02:55.960249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.903 [2024-11-20 11:02:55.960297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:06.903 [2024-11-20 11:02:55.960311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.694 ms 00:26:06.903 [2024-11-20 11:02:55.960323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.903 [2024-11-20 11:02:55.960356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.903 [2024-11-20 11:02:55.960374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:06.903 [2024-11-20 11:02:55.960384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:06.903 [2024-11-20 11:02:55.960396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.903 [2024-11-20 11:02:55.960910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.903 [2024-11-20 11:02:55.960931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:06.903 [2024-11-20 11:02:55.960942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:26:06.903 [2024-11-20 11:02:55.960954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.903 [2024-11-20 11:02:55.961051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.903 [2024-11-20 11:02:55.961065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:06.903 [2024-11-20 11:02:55.961078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:26:06.903 [2024-11-20 11:02:55.961095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.903 [2024-11-20 11:02:55.980952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.903 [2024-11-20 11:02:55.980994] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:06.903 [2024-11-20 11:02:55.981008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.870 ms 00:26:06.903 [2024-11-20 11:02:55.981020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.903 [2024-11-20 11:02:55.993163] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:06.903 [2024-11-20 11:02:55.996393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.903 [2024-11-20 11:02:55.996423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:06.903 [2024-11-20 11:02:55.996438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.317 ms 00:26:06.903 [2024-11-20 11:02:55.996448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.161 [2024-11-20 11:02:56.168920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.161 [2024-11-20 11:02:56.168975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:07.161 [2024-11-20 11:02:56.168997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 172.715 ms 00:26:07.161 [2024-11-20 11:02:56.169008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.161 [2024-11-20 11:02:56.169194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.161 [2024-11-20 11:02:56.169211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:07.161 [2024-11-20 11:02:56.169229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:26:07.161 [2024-11-20 11:02:56.169239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.161 [2024-11-20 11:02:56.206140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.161 [2024-11-20 11:02:56.206181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:07.161 [2024-11-20 11:02:56.206201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.904 ms 00:26:07.161 [2024-11-20 11:02:56.206228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.161 [2024-11-20 11:02:56.242074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.161 [2024-11-20 11:02:56.242113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:07.161 [2024-11-20 11:02:56.242131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.850 ms 00:26:07.161 [2024-11-20 11:02:56.242141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.161 [2024-11-20 11:02:56.242949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.161 [2024-11-20 11:02:56.242973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:07.161 [2024-11-20 11:02:56.242988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:26:07.161 [2024-11-20 11:02:56.242999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.161 [2024-11-20 11:02:56.343372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.161 [2024-11-20 11:02:56.343422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:07.161 [2024-11-20 11:02:56.343445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.474 ms 00:26:07.161 [2024-11-20 11:02:56.343457] 
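
The "l2p maximum resident size is: 9 (of 10) MiB" notice above is the --l2p_dram_limit 10 from the create call at work: per the layout dump earlier, the full L2P has 20971520 entries of 4 bytes each, i.e. 80 MiB, so only a 10 MiB window may live in DRAM, of which 9 MiB is kept for resident pages. A quick check of that arithmetic:

  echo $(( 20971520 * 4 / 1024 / 1024 ))            # 80 -> matches "Region l2p ... blocks: 80.00 MiB"
  echo $(( 20971520 * 4096 / 1024 / 1024 / 1024 ))  # 80 -> 80 GiB of mappable user data at 4 KiB per entry
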
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.161 [2024-11-20 11:02:56.379923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.161 [2024-11-20 11:02:56.379966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:07.161 [2024-11-20 11:02:56.379984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.433 ms 00:26:07.161 [2024-11-20 11:02:56.379994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.420 [2024-11-20 11:02:56.413977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.420 [2024-11-20 11:02:56.414013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:07.420 [2024-11-20 11:02:56.414030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.995 ms 00:26:07.420 [2024-11-20 11:02:56.414055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.420 [2024-11-20 11:02:56.448807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.420 [2024-11-20 11:02:56.448972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:07.420 [2024-11-20 11:02:56.449015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.762 ms 00:26:07.420 [2024-11-20 11:02:56.449026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.420 [2024-11-20 11:02:56.449073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.420 [2024-11-20 11:02:56.449085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:07.420 [2024-11-20 11:02:56.449101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:07.420 [2024-11-20 11:02:56.449112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.420 [2024-11-20 11:02:56.449213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.420 [2024-11-20 11:02:56.449226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:07.420 [2024-11-20 11:02:56.449243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:07.420 [2024-11-20 11:02:56.449253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.420 [2024-11-20 11:02:56.450241] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5607.753 ms, result 0 00:26:07.420 { 00:26:07.420 "name": "ftl0", 00:26:07.420 "uuid": "690cb0a8-9167-4dbe-8ee2-0c9a3b18b8b9" 00:26:07.420 } 00:26:07.420 11:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:07.420 11:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:07.679 11:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:07.679 11:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:07.679 11:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:07.679 /dev/nbd0 00:26:07.679 11:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:07.679 11:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:07.679 11:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:26:07.679 11:02:56 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:07.679 11:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:07.679 11:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:07.679 11:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:26:07.679 11:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:07.679 11:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:07.679 11:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:07.679 1+0 records in 00:26:07.679 1+0 records out 00:26:07.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306299 s, 13.4 MB/s 00:26:07.679 11:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:07.679 11:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:26:07.679 11:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:07.939 11:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:07.939 11:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:26:07.939 11:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:07.939 [2024-11-20 11:02:57.017953] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:26:07.939 [2024-11-20 11:02:57.018082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81060 ] 00:26:08.198 [2024-11-20 11:02:57.194933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.198 [2024-11-20 11:02:57.302353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:09.576  [2024-11-20T11:02:59.764Z] Copying: 211/1024 [MB] (211 MBps) [2024-11-20T11:03:00.700Z] Copying: 418/1024 [MB] (207 MBps) [2024-11-20T11:03:01.636Z] Copying: 626/1024 [MB] (207 MBps) [2024-11-20T11:03:03.014Z] Copying: 832/1024 [MB] (206 MBps) [2024-11-20T11:03:03.951Z] Copying: 1024/1024 [MB] (average 205 MBps) 00:26:14.698 00:26:14.698 11:03:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:16.603 11:03:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:16.603 [2024-11-20 11:03:05.456148] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
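
With ftl0 exported as /dev/nbd0 (modprobe nbd plus nbd_start_disk, then waitfornbd's single-block O_DIRECT read as a liveness check), the test exercises the data path: spdk_dd fills a 1 GiB file from /dev/urandom (262144 x 4 KiB blocks), md5sum fingerprints it, presumably for comparison after the dirty shutdown, and a second spdk_dd replays the file onto the FTL device with --oflag=direct. Note the rates: ~206 MBps filling the file versus ~17 MBps below once writes go through the FTL write path. Condensed from the commands above (rpc.py and spdk_dd abbreviate the full paths used in the log):

  rpc.py nbd_start_disk ftl0 /dev/nbd0
  spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144   # 262144 * 4096 B = 1 GiB
  md5sum testfile                                                           # fingerprint before the dirty shutdown
  spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
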
00:26:16.603 [2024-11-20 11:03:05.456475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81147 ] 00:26:16.603 [2024-11-20 11:03:05.636448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.603 [2024-11-20 11:03:05.744304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.984  [2024-11-20T11:03:08.233Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-20T11:03:09.168Z] Copying: 32/1024 [MB] (16 MBps) [2024-11-20T11:03:10.105Z] Copying: 49/1024 [MB] (16 MBps) [2024-11-20T11:03:11.483Z] Copying: 67/1024 [MB] (17 MBps) [2024-11-20T11:03:12.421Z] Copying: 84/1024 [MB] (17 MBps) [2024-11-20T11:03:13.358Z] Copying: 102/1024 [MB] (17 MBps) [2024-11-20T11:03:14.294Z] Copying: 119/1024 [MB] (17 MBps) [2024-11-20T11:03:15.231Z] Copying: 137/1024 [MB] (17 MBps) [2024-11-20T11:03:16.167Z] Copying: 154/1024 [MB] (17 MBps) [2024-11-20T11:03:17.105Z] Copying: 171/1024 [MB] (17 MBps) [2024-11-20T11:03:18.482Z] Copying: 189/1024 [MB] (17 MBps) [2024-11-20T11:03:19.423Z] Copying: 206/1024 [MB] (17 MBps) [2024-11-20T11:03:20.358Z] Copying: 224/1024 [MB] (17 MBps) [2024-11-20T11:03:21.294Z] Copying: 241/1024 [MB] (17 MBps) [2024-11-20T11:03:22.230Z] Copying: 259/1024 [MB] (17 MBps) [2024-11-20T11:03:23.336Z] Copying: 277/1024 [MB] (17 MBps) [2024-11-20T11:03:24.270Z] Copying: 294/1024 [MB] (17 MBps) [2024-11-20T11:03:25.206Z] Copying: 313/1024 [MB] (18 MBps) [2024-11-20T11:03:26.143Z] Copying: 330/1024 [MB] (17 MBps) [2024-11-20T11:03:27.079Z] Copying: 348/1024 [MB] (17 MBps) [2024-11-20T11:03:28.456Z] Copying: 366/1024 [MB] (17 MBps) [2024-11-20T11:03:29.391Z] Copying: 383/1024 [MB] (17 MBps) [2024-11-20T11:03:30.326Z] Copying: 401/1024 [MB] (17 MBps) [2024-11-20T11:03:31.262Z] Copying: 419/1024 [MB] (17 MBps) [2024-11-20T11:03:32.198Z] Copying: 436/1024 [MB] (17 MBps) [2024-11-20T11:03:33.133Z] Copying: 454/1024 [MB] (17 MBps) [2024-11-20T11:03:34.069Z] Copying: 471/1024 [MB] (17 MBps) [2024-11-20T11:03:35.445Z] Copying: 489/1024 [MB] (17 MBps) [2024-11-20T11:03:36.381Z] Copying: 507/1024 [MB] (17 MBps) [2024-11-20T11:03:37.315Z] Copying: 524/1024 [MB] (17 MBps) [2024-11-20T11:03:38.253Z] Copying: 542/1024 [MB] (17 MBps) [2024-11-20T11:03:39.189Z] Copying: 560/1024 [MB] (17 MBps) [2024-11-20T11:03:40.175Z] Copying: 577/1024 [MB] (17 MBps) [2024-11-20T11:03:41.133Z] Copying: 595/1024 [MB] (17 MBps) [2024-11-20T11:03:42.071Z] Copying: 612/1024 [MB] (17 MBps) [2024-11-20T11:03:43.449Z] Copying: 630/1024 [MB] (17 MBps) [2024-11-20T11:03:44.018Z] Copying: 647/1024 [MB] (17 MBps) [2024-11-20T11:03:45.395Z] Copying: 665/1024 [MB] (17 MBps) [2024-11-20T11:03:46.332Z] Copying: 682/1024 [MB] (17 MBps) [2024-11-20T11:03:47.269Z] Copying: 700/1024 [MB] (17 MBps) [2024-11-20T11:03:48.205Z] Copying: 718/1024 [MB] (17 MBps) [2024-11-20T11:03:49.142Z] Copying: 735/1024 [MB] (17 MBps) [2024-11-20T11:03:50.085Z] Copying: 753/1024 [MB] (17 MBps) [2024-11-20T11:03:51.018Z] Copying: 770/1024 [MB] (17 MBps) [2024-11-20T11:03:52.001Z] Copying: 788/1024 [MB] (17 MBps) [2024-11-20T11:03:53.378Z] Copying: 806/1024 [MB] (17 MBps) [2024-11-20T11:03:54.314Z] Copying: 823/1024 [MB] (17 MBps) [2024-11-20T11:03:55.250Z] Copying: 841/1024 [MB] (17 MBps) [2024-11-20T11:03:56.187Z] Copying: 858/1024 [MB] (17 MBps) [2024-11-20T11:03:57.122Z] Copying: 876/1024 [MB] (17 MBps) 
[2024-11-20T11:04:05.820Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:27:16.567 00:27:16.567 11:04:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:16.567 11:04:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:16.567 11:04:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:16.567 [2024-11-20 11:04:05.806872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.567 [2024-11-20 11:04:05.806928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:16.567 [2024-11-20 11:04:05.806945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:16.567 [2024-11-20 11:04:05.806959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.567 [2024-11-20 11:04:05.806984] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:16.567 [2024-11-20 11:04:05.811192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.567 [2024-11-20 11:04:05.811225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:16.567 [2024-11-20 11:04:05.811241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.189 ms 00:27:16.567 [2024-11-20 11:04:05.811251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.567 [2024-11-20 11:04:05.813246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.567 [2024-11-20 11:04:05.813285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:16.567 [2024-11-20 11:04:05.813306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.955 ms 00:27:16.567 [2024-11-20 11:04:05.813317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.828 [2024-11-20 11:04:05.831293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.828 [2024-11-20 11:04:05.831479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:16.828 [2024-11-20 11:04:05.831513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.979 ms 00:27:16.828 [2024-11-20 11:04:05.831525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.828 [2024-11-20 11:04:05.836430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.828 [2024-11-20 11:04:05.836463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:16.828 [2024-11-20 11:04:05.836478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.869 ms 00:27:16.828 [2024-11-20 11:04:05.836487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.828 [2024-11-20 11:04:05.870654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.828 [2024-11-20 11:04:05.870691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
NV cache metadata 00:27:16.828 [2024-11-20 11:04:05.870708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.148 ms 00:27:16.828 [2024-11-20 11:04:05.870717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.828 [2024-11-20 11:04:05.891543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.828 [2024-11-20 11:04:05.891581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:16.828 [2024-11-20 11:04:05.891626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.814 ms 00:27:16.828 [2024-11-20 11:04:05.891640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.828 [2024-11-20 11:04:05.891797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.828 [2024-11-20 11:04:05.891812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:16.828 [2024-11-20 11:04:05.891826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:27:16.828 [2024-11-20 11:04:05.891836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.828 [2024-11-20 11:04:05.926430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.828 [2024-11-20 11:04:05.926624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:16.828 [2024-11-20 11:04:05.926650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.627 ms 00:27:16.828 [2024-11-20 11:04:05.926660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.828 [2024-11-20 11:04:05.961381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.828 [2024-11-20 11:04:05.961535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:16.828 [2024-11-20 11:04:05.961577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.734 ms 00:27:16.828 [2024-11-20 11:04:05.961587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.828 [2024-11-20 11:04:05.994960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.828 [2024-11-20 11:04:05.994995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:16.828 [2024-11-20 11:04:05.995010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.367 ms 00:27:16.828 [2024-11-20 11:04:05.995020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.828 [2024-11-20 11:04:06.030549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.828 [2024-11-20 11:04:06.030585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:16.828 [2024-11-20 11:04:06.030611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.466 ms 00:27:16.828 [2024-11-20 11:04:06.030621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.828 [2024-11-20 11:04:06.030678] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:16.828 [2024-11-20 11:04:06.030695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:16.828 [2024-11-20 11:04:06.030710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:16.828 [2024-11-20 11:04:06.030721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:16.828 [2024-11-20 11:04:06.030734] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:16.828 (Bands 5-99: 95 identical entries, each 0 / 261120 wr_cnt: 0 state: free, elided) 00:27:16.829 [2024-11-20 11:04:06.031918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:16.829 [2024-11-20 11:04:06.031936] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:16.829 [2024-11-20 11:04:06.031949] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 690cb0a8-9167-4dbe-8ee2-0c9a3b18b8b9 00:27:16.829 [2024-11-20 11:04:06.031959] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:16.829
[2024-11-20 11:04:06.031974] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:16.829 [2024-11-20 11:04:06.031983] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:16.829 [2024-11-20 11:04:06.031999] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:16.829 [2024-11-20 11:04:06.032008] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:16.829 [2024-11-20 11:04:06.032021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:16.829 [2024-11-20 11:04:06.032031] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:16.829 [2024-11-20 11:04:06.032043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:16.829 [2024-11-20 11:04:06.032051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:16.829 [2024-11-20 11:04:06.032063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.829 [2024-11-20 11:04:06.032073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:16.829 [2024-11-20 11:04:06.032087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:27:16.829 [2024-11-20 11:04:06.032096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.829 [2024-11-20 11:04:06.050950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.829 [2024-11-20 11:04:06.050983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:16.829 [2024-11-20 11:04:06.051005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.830 ms 00:27:16.829 [2024-11-20 11:04:06.051015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.829 [2024-11-20 11:04:06.051478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:16.829 [2024-11-20 11:04:06.051489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:16.829 [2024-11-20 11:04:06.051501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:27:16.829 [2024-11-20 11:04:06.051510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.089 [2024-11-20 11:04:06.115553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.089 [2024-11-20 11:04:06.115613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:17.089 [2024-11-20 11:04:06.115630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.089 [2024-11-20 11:04:06.115641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.089 [2024-11-20 11:04:06.115714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.089 [2024-11-20 11:04:06.115725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:17.089 [2024-11-20 11:04:06.115739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.089 [2024-11-20 11:04:06.115748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.089 [2024-11-20 11:04:06.115830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.089 [2024-11-20 11:04:06.115843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:17.089 [2024-11-20 11:04:06.115860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.089 [2024-11-20 11:04:06.115869] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.089 [2024-11-20 11:04:06.115895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.089 [2024-11-20 11:04:06.115906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:17.089 [2024-11-20 11:04:06.115918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.089 [2024-11-20 11:04:06.115928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.089 [2024-11-20 11:04:06.234826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.089 [2024-11-20 11:04:06.234881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:17.089 [2024-11-20 11:04:06.234899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.089 [2024-11-20 11:04:06.234909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.089 [2024-11-20 11:04:06.331247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.089 [2024-11-20 11:04:06.331293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:17.089 [2024-11-20 11:04:06.331310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.089 [2024-11-20 11:04:06.331335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.089 [2024-11-20 11:04:06.331445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.089 [2024-11-20 11:04:06.331458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:17.089 [2024-11-20 11:04:06.331471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.089 [2024-11-20 11:04:06.331485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.089 [2024-11-20 11:04:06.331539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.089 [2024-11-20 11:04:06.331551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:17.089 [2024-11-20 11:04:06.331564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.089 [2024-11-20 11:04:06.331573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.089 [2024-11-20 11:04:06.331851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.089 [2024-11-20 11:04:06.331898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:17.089 [2024-11-20 11:04:06.331934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.089 [2024-11-20 11:04:06.331964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.089 [2024-11-20 11:04:06.332046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.089 [2024-11-20 11:04:06.332430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:17.089 [2024-11-20 11:04:06.332451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.089 [2024-11-20 11:04:06.332461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.089 [2024-11-20 11:04:06.332510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.089 [2024-11-20 11:04:06.332522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:17.089 [2024-11-20 11:04:06.332534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:27:17.089 [2024-11-20 11:04:06.332544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.089 [2024-11-20 11:04:06.332609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:17.089 [2024-11-20 11:04:06.332621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:17.089 [2024-11-20 11:04:06.332635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:17.089 [2024-11-20 11:04:06.332645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.089 [2024-11-20 11:04:06.332779] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 526.724 ms, result 0 00:27:17.089 true 00:27:17.349 11:04:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80900 00:27:17.349 11:04:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80900 00:27:17.349 11:04:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:17.349 [2024-11-20 11:04:06.455178] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:27:17.349 [2024-11-20 11:04:06.455305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81770 ] 00:27:17.608 [2024-11-20 11:04:06.635702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.608 [2024-11-20 11:04:06.737034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.983  [2024-11-20T11:04:09.178Z] Copying: 214/1024 [MB] (214 MBps) [2024-11-20T11:04:10.114Z] Copying: 428/1024 [MB] (213 MBps) [2024-11-20T11:04:11.050Z] Copying: 645/1024 [MB] (216 MBps) [2024-11-20T11:04:11.985Z] Copying: 856/1024 [MB] (211 MBps) [2024-11-20T11:04:12.920Z] Copying: 1024/1024 [MB] (average 214 MBps) 00:27:23.667 00:27:23.667 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80900 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:23.667 11:04:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:23.926 [2024-11-20 11:04:12.995647] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
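Two things in the block above deserve a note. First, the statistics dump: WAF (write amplification factor) is total media writes divided by user writes; with user writes: 0 recorded here against total writes: 960, the quotient 960 / 0 is reported as inf. Second, the kill -9 is the step that gives the test its name: the spdk_tgt process is removed without any orderly teardown, and the 'line 87: 80900 Killed' message is just bash reaping that process, not a failure. A hedged sketch of the dirty-shutdown step, condensed from the commands visible in the trace ($svcpid and $FTL_CONFIG are illustrative variable names):

    # Remove the target mid-test so FTL state on disk is left exactly as-is.
    kill -9 "$svcpid"                           # pid 80900 in this run
    rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"  # drop its stale trace file
    # Drive the second half of the payload through a standalone spdk_dd that
    # re-attaches ftl0 from the saved JSON config, forcing a dirty load.
    "$SPDK_BIN_DIR/spdk_dd" --if="$testfile2" --ob=ftl0 \
        --count=262144 --seek=262144 \
        --json="$FTL_CONFIG"                    # test/ftl/config/ftl.json here

The bs_recover 'Performing recovery' notices on the next load confirm that this path exercises recovery rather than a clean open.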
00:27:23.926 [2024-11-20 11:04:12.995924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81834 ] 00:27:23.926 [2024-11-20 11:04:13.174752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.184 [2024-11-20 11:04:13.278329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.442 [2024-11-20 11:04:13.608717] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:24.442 [2024-11-20 11:04:13.608781] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:24.442 [2024-11-20 11:04:13.674325] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:24.442 [2024-11-20 11:04:13.674706] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:24.442 [2024-11-20 11:04:13.675003] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:25.010 [2024-11-20 11:04:13.984338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.010 [2024-11-20 11:04:13.984536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:25.010 [2024-11-20 11:04:13.984560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:25.010 [2024-11-20 11:04:13.984571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.010 [2024-11-20 11:04:13.984652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.010 [2024-11-20 11:04:13.984666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:25.010 [2024-11-20 11:04:13.984677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:25.010 [2024-11-20 11:04:13.984687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.010 [2024-11-20 11:04:13.984709] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:25.010 [2024-11-20 11:04:13.985664] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:25.010 [2024-11-20 11:04:13.985692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.010 [2024-11-20 11:04:13.985703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:25.010 [2024-11-20 11:04:13.985714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:27:25.010 [2024-11-20 11:04:13.985723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.010 [2024-11-20 11:04:13.987250] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:25.010 [2024-11-20 11:04:14.005351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.010 [2024-11-20 11:04:14.005394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:25.010 [2024-11-20 11:04:14.005407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.130 ms 00:27:25.010 [2024-11-20 11:04:14.005435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.010 [2024-11-20 11:04:14.005495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.010 [2024-11-20 11:04:14.005508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:27:25.010 [2024-11-20 11:04:14.005518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:25.010 [2024-11-20 11:04:14.005528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.010 [2024-11-20 11:04:14.012191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.010 [2024-11-20 11:04:14.012218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:25.010 [2024-11-20 11:04:14.012230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.587 ms 00:27:25.010 [2024-11-20 11:04:14.012240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.010 [2024-11-20 11:04:14.012315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.010 [2024-11-20 11:04:14.012328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:25.010 [2024-11-20 11:04:14.012338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:25.010 [2024-11-20 11:04:14.012348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.010 [2024-11-20 11:04:14.012385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.010 [2024-11-20 11:04:14.012401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:25.010 [2024-11-20 11:04:14.012411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:25.010 [2024-11-20 11:04:14.012421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.010 [2024-11-20 11:04:14.012444] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:25.010 [2024-11-20 11:04:14.017042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.010 [2024-11-20 11:04:14.017070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:25.010 [2024-11-20 11:04:14.017082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.610 ms 00:27:25.010 [2024-11-20 11:04:14.017091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.010 [2024-11-20 11:04:14.017119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.010 [2024-11-20 11:04:14.017129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:25.010 [2024-11-20 11:04:14.017143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:25.011 [2024-11-20 11:04:14.017153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.011 [2024-11-20 11:04:14.017201] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:25.011 [2024-11-20 11:04:14.017225] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:25.011 [2024-11-20 11:04:14.017258] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:25.011 [2024-11-20 11:04:14.017274] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:25.011 [2024-11-20 11:04:14.017355] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:25.011 [2024-11-20 11:04:14.017368] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:25.011 
[2024-11-20 11:04:14.017380] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:25.011 [2024-11-20 11:04:14.017392] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:25.011 [2024-11-20 11:04:14.017407] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:25.011 [2024-11-20 11:04:14.017418] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:25.011 [2024-11-20 11:04:14.017427] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:25.011 [2024-11-20 11:04:14.017436] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:25.011 [2024-11-20 11:04:14.017445] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:25.011 [2024-11-20 11:04:14.017455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.011 [2024-11-20 11:04:14.017465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:25.011 [2024-11-20 11:04:14.017474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:27:25.011 [2024-11-20 11:04:14.017483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.011 [2024-11-20 11:04:14.017548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.011 [2024-11-20 11:04:14.017561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:25.011 [2024-11-20 11:04:14.017571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:25.011 [2024-11-20 11:04:14.017580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.011 [2024-11-20 11:04:14.017701] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:25.011 [2024-11-20 11:04:14.017716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:25.011 [2024-11-20 11:04:14.017728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:25.011 [2024-11-20 11:04:14.017738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:25.011 [2024-11-20 11:04:14.017748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:25.011 [2024-11-20 11:04:14.017757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:25.011 [2024-11-20 11:04:14.017766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:25.011 [2024-11-20 11:04:14.017776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:25.011 [2024-11-20 11:04:14.017786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:25.011 [2024-11-20 11:04:14.017795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:25.011 [2024-11-20 11:04:14.017804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:25.011 [2024-11-20 11:04:14.017822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:25.011 [2024-11-20 11:04:14.017831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:25.011 [2024-11-20 11:04:14.017840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:25.011 [2024-11-20 11:04:14.017850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:25.011 [2024-11-20 11:04:14.017858] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:25.011 [2024-11-20 11:04:14.017869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:25.011 [2024-11-20 11:04:14.017877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:25.011 [2024-11-20 11:04:14.017886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:25.011 [2024-11-20 11:04:14.017896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:25.011 [2024-11-20 11:04:14.017904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:25.011 [2024-11-20 11:04:14.017913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:25.011 [2024-11-20 11:04:14.017921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:25.011 [2024-11-20 11:04:14.017930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:25.011 [2024-11-20 11:04:14.017939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:25.011 [2024-11-20 11:04:14.017948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:25.011 [2024-11-20 11:04:14.017957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:25.011 [2024-11-20 11:04:14.017965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:25.011 [2024-11-20 11:04:14.017974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:25.011 [2024-11-20 11:04:14.017983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:25.011 [2024-11-20 11:04:14.017992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:25.011 [2024-11-20 11:04:14.018001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:25.011 [2024-11-20 11:04:14.018010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:25.011 [2024-11-20 11:04:14.018019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:25.011 [2024-11-20 11:04:14.018028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:25.011 [2024-11-20 11:04:14.018036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:25.011 [2024-11-20 11:04:14.018045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:25.011 [2024-11-20 11:04:14.018053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:25.011 [2024-11-20 11:04:14.018063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:25.011 [2024-11-20 11:04:14.018071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:25.011 [2024-11-20 11:04:14.018080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:25.011 [2024-11-20 11:04:14.018105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:25.011 [2024-11-20 11:04:14.018114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:25.011 [2024-11-20 11:04:14.018123] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:25.011 [2024-11-20 11:04:14.018133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:25.011 [2024-11-20 11:04:14.018142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:25.011 [2024-11-20 11:04:14.018155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:25.011 [2024-11-20 
11:04:14.018165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:25.011 [2024-11-20 11:04:14.018174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:25.011 [2024-11-20 11:04:14.018183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:25.011 [2024-11-20 11:04:14.018193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:25.011 [2024-11-20 11:04:14.018202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:25.011 [2024-11-20 11:04:14.018211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:25.011 [2024-11-20 11:04:14.018221] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:25.011 [2024-11-20 11:04:14.018233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:25.011 [2024-11-20 11:04:14.018245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:25.011 [2024-11-20 11:04:14.018256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:25.011 [2024-11-20 11:04:14.018266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:25.011 [2024-11-20 11:04:14.018276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:25.011 [2024-11-20 11:04:14.018287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:25.012 [2024-11-20 11:04:14.018297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:25.012 [2024-11-20 11:04:14.018308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:25.012 [2024-11-20 11:04:14.018318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:25.012 [2024-11-20 11:04:14.018329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:25.012 [2024-11-20 11:04:14.018340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:25.012 [2024-11-20 11:04:14.018350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:25.012 [2024-11-20 11:04:14.018360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:25.012 [2024-11-20 11:04:14.018370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:25.012 [2024-11-20 11:04:14.018380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:25.012 [2024-11-20 11:04:14.018390] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:27:25.012 [2024-11-20 11:04:14.018401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:25.012 [2024-11-20 11:04:14.018412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:25.012 [2024-11-20 11:04:14.018422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:25.012 [2024-11-20 11:04:14.018432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:25.012 [2024-11-20 11:04:14.018442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:25.012 [2024-11-20 11:04:14.018453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.012 [2024-11-20 11:04:14.018463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:25.012 [2024-11-20 11:04:14.018473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:27:25.012 [2024-11-20 11:04:14.018482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.012 [2024-11-20 11:04:14.057229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.012 [2024-11-20 11:04:14.057272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:25.012 [2024-11-20 11:04:14.057287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.756 ms 00:27:25.012 [2024-11-20 11:04:14.057297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.012 [2024-11-20 11:04:14.057384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.012 [2024-11-20 11:04:14.057399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:25.012 [2024-11-20 11:04:14.057410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:25.012 [2024-11-20 11:04:14.057420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.012 [2024-11-20 11:04:14.113471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.012 [2024-11-20 11:04:14.113520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:25.012 [2024-11-20 11:04:14.113536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.076 ms 00:27:25.012 [2024-11-20 11:04:14.113550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.012 [2024-11-20 11:04:14.113618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.012 [2024-11-20 11:04:14.113631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:25.012 [2024-11-20 11:04:14.113643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:25.012 [2024-11-20 11:04:14.113653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.012 [2024-11-20 11:04:14.114147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.012 [2024-11-20 11:04:14.114168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:25.012 [2024-11-20 11:04:14.114179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:27:25.012 [2024-11-20 11:04:14.114190] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.012 [2024-11-20 11:04:14.114314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.012 [2024-11-20 11:04:14.114337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:25.012 [2024-11-20 11:04:14.114348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:27:25.012 [2024-11-20 11:04:14.114359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.012 [2024-11-20 11:04:14.133893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.012 [2024-11-20 11:04:14.133927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:25.012 [2024-11-20 11:04:14.133940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.544 ms 00:27:25.012 [2024-11-20 11:04:14.133951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.012 [2024-11-20 11:04:14.152817] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:25.012 [2024-11-20 11:04:14.152856] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:25.012 [2024-11-20 11:04:14.152871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.012 [2024-11-20 11:04:14.152882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:25.012 [2024-11-20 11:04:14.152894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.834 ms 00:27:25.012 [2024-11-20 11:04:14.152904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.012 [2024-11-20 11:04:14.182345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.012 [2024-11-20 11:04:14.182390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:25.012 [2024-11-20 11:04:14.182417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.443 ms 00:27:25.012 [2024-11-20 11:04:14.182428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.012 [2024-11-20 11:04:14.200829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.012 [2024-11-20 11:04:14.200864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:25.012 [2024-11-20 11:04:14.200876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.375 ms 00:27:25.012 [2024-11-20 11:04:14.200886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.012 [2024-11-20 11:04:14.219011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.012 [2024-11-20 11:04:14.219175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:25.012 [2024-11-20 11:04:14.219196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.116 ms 00:27:25.012 [2024-11-20 11:04:14.219206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.012 [2024-11-20 11:04:14.219988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.012 [2024-11-20 11:04:14.220019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:25.012 [2024-11-20 11:04:14.220032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:27:25.012 [2024-11-20 11:04:14.220043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
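The Restore steps above replay everything FTL persisted or can reconstruct: NV cache chunk state (full chunks = 2, empty chunks = 2), the valid map, band info, trim state, P2L checkpoints and finally the L2P itself, so the dirty-loaded device presents the same logical contents it held before the kill. The test's eventual pass criterion is the checksum taken back at dirty_shutdown.sh@76: read the data back through the recovered device and compare digests. A minimal sketch of that idea (file names are illustrative; the flags mirror the spdk_dd usage already shown in this log):

    # Digest of the payload written before the crash (computed earlier).
    md5_before=$(md5sum "$testfile" | cut -d' ' -f1)
    # Read the same range back through the recovered FTL bdev.
    "$SPDK_BIN_DIR/spdk_dd" --ib=ftl0 --of="$testfile_read" \
        --count=262144 --json="$FTL_CONFIG"
    md5_after=$(md5sum "$testfile_read" | cut -d' ' -f1)
    [[ $md5_before == "$md5_after" ]] || { echo 'data lost across dirty shutdown' >&2; exit 1; }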
00:27:25.271 [2024-11-20 11:04:14.303836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.271 [2024-11-20 11:04:14.303917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:25.271 [2024-11-20 11:04:14.303934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.905 ms 00:27:25.271 [2024-11-20 11:04:14.303945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.271 [2024-11-20 11:04:14.316101] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:25.271 [2024-11-20 11:04:14.319290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.271 [2024-11-20 11:04:14.319329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:25.271 [2024-11-20 11:04:14.319344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.287 ms 00:27:25.271 [2024-11-20 11:04:14.319354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.271 [2024-11-20 11:04:14.319462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.271 [2024-11-20 11:04:14.319475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:25.271 [2024-11-20 11:04:14.319486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:25.271 [2024-11-20 11:04:14.319496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.271 [2024-11-20 11:04:14.319589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.271 [2024-11-20 11:04:14.319620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:25.271 [2024-11-20 11:04:14.319630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:25.271 [2024-11-20 11:04:14.319640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.271 [2024-11-20 11:04:14.319664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.271 [2024-11-20 11:04:14.319679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:25.271 [2024-11-20 11:04:14.319690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:25.271 [2024-11-20 11:04:14.319700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.271 [2024-11-20 11:04:14.319734] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:25.271 [2024-11-20 11:04:14.319748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.271 [2024-11-20 11:04:14.319758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:25.271 [2024-11-20 11:04:14.319769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:25.271 [2024-11-20 11:04:14.319779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.271 [2024-11-20 11:04:14.356673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.271 [2024-11-20 11:04:14.356711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:25.271 [2024-11-20 11:04:14.356726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.930 ms 00:27:25.271 [2024-11-20 11:04:14.356737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.271 [2024-11-20 11:04:14.356851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.271 [2024-11-20 
11:04:14.356865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:25.271 [2024-11-20 11:04:14.356877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:25.271 [2024-11-20 11:04:14.356887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.271 [2024-11-20 11:04:14.357963] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 373.758 ms, result 0 00:27:26.207  [2024-11-20T11:04:16.397Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-20T11:04:17.775Z] Copying: 50/1024 [MB] (25 MBps) [2024-11-20T11:04:18.713Z] Copying: 75/1024 [MB] (24 MBps) [2024-11-20T11:04:19.651Z] Copying: 99/1024 [MB] (24 MBps) [2024-11-20T11:04:20.615Z] Copying: 124/1024 [MB] (25 MBps) [2024-11-20T11:04:21.555Z] Copying: 150/1024 [MB] (25 MBps) [2024-11-20T11:04:22.490Z] Copying: 176/1024 [MB] (26 MBps) [2024-11-20T11:04:23.427Z] Copying: 202/1024 [MB] (25 MBps) [2024-11-20T11:04:24.366Z] Copying: 227/1024 [MB] (25 MBps) [2024-11-20T11:04:25.743Z] Copying: 252/1024 [MB] (25 MBps) [2024-11-20T11:04:26.680Z] Copying: 277/1024 [MB] (25 MBps) [2024-11-20T11:04:27.617Z] Copying: 302/1024 [MB] (24 MBps) [2024-11-20T11:04:28.555Z] Copying: 327/1024 [MB] (24 MBps) [2024-11-20T11:04:29.492Z] Copying: 351/1024 [MB] (24 MBps) [2024-11-20T11:04:30.430Z] Copying: 377/1024 [MB] (25 MBps) [2024-11-20T11:04:31.366Z] Copying: 402/1024 [MB] (25 MBps) [2024-11-20T11:04:32.742Z] Copying: 427/1024 [MB] (25 MBps) [2024-11-20T11:04:33.679Z] Copying: 452/1024 [MB] (25 MBps) [2024-11-20T11:04:34.614Z] Copying: 478/1024 [MB] (25 MBps) [2024-11-20T11:04:35.547Z] Copying: 503/1024 [MB] (25 MBps) [2024-11-20T11:04:36.481Z] Copying: 529/1024 [MB] (25 MBps) [2024-11-20T11:04:37.416Z] Copying: 554/1024 [MB] (25 MBps) [2024-11-20T11:04:38.351Z] Copying: 577/1024 [MB] (22 MBps) [2024-11-20T11:04:39.738Z] Copying: 602/1024 [MB] (25 MBps) [2024-11-20T11:04:40.672Z] Copying: 627/1024 [MB] (25 MBps) [2024-11-20T11:04:41.606Z] Copying: 653/1024 [MB] (25 MBps) [2024-11-20T11:04:42.538Z] Copying: 678/1024 [MB] (25 MBps) [2024-11-20T11:04:43.541Z] Copying: 704/1024 [MB] (25 MBps) [2024-11-20T11:04:44.505Z] Copying: 730/1024 [MB] (26 MBps) [2024-11-20T11:04:45.438Z] Copying: 756/1024 [MB] (25 MBps) [2024-11-20T11:04:46.373Z] Copying: 782/1024 [MB] (25 MBps) [2024-11-20T11:04:47.749Z] Copying: 807/1024 [MB] (25 MBps) [2024-11-20T11:04:48.316Z] Copying: 833/1024 [MB] (25 MBps) [2024-11-20T11:04:49.690Z] Copying: 859/1024 [MB] (25 MBps) [2024-11-20T11:04:50.639Z] Copying: 884/1024 [MB] (25 MBps) [2024-11-20T11:04:51.574Z] Copying: 909/1024 [MB] (25 MBps) [2024-11-20T11:04:52.509Z] Copying: 935/1024 [MB] (25 MBps) [2024-11-20T11:04:53.444Z] Copying: 960/1024 [MB] (25 MBps) [2024-11-20T11:04:54.380Z] Copying: 986/1024 [MB] (25 MBps) [2024-11-20T11:04:55.316Z] Copying: 1011/1024 [MB] (25 MBps) [2024-11-20T11:04:55.575Z] Copying: 1023/1024 [MB] (11 MBps) [2024-11-20T11:04:55.575Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-20 11:04:55.509647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.322 [2024-11-20 11:04:55.509705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:06.322 [2024-11-20 11:04:55.509721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:06.322 [2024-11-20 11:04:55.509733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.322 [2024-11-20 11:04:55.510788] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:06.322 [2024-11-20 11:04:55.516391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.322 [2024-11-20 11:04:55.516429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:06.322 [2024-11-20 11:04:55.516442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.582 ms 00:28:06.322 [2024-11-20 11:04:55.516454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.322 [2024-11-20 11:04:55.525950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.322 [2024-11-20 11:04:55.525989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:06.322 [2024-11-20 11:04:55.526002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.375 ms 00:28:06.322 [2024-11-20 11:04:55.526012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.322 [2024-11-20 11:04:55.549074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.322 [2024-11-20 11:04:55.549114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:06.322 [2024-11-20 11:04:55.549139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.081 ms 00:28:06.322 [2024-11-20 11:04:55.549150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.322 [2024-11-20 11:04:55.553986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.322 [2024-11-20 11:04:55.554026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:06.322 [2024-11-20 11:04:55.554037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.810 ms 00:28:06.322 [2024-11-20 11:04:55.554046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.580 [2024-11-20 11:04:55.588542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.580 [2024-11-20 11:04:55.588731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:06.580 [2024-11-20 11:04:55.588752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.513 ms 00:28:06.580 [2024-11-20 11:04:55.588764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.580 [2024-11-20 11:04:55.608434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.580 [2024-11-20 11:04:55.608589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:06.580 [2024-11-20 11:04:55.608623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.614 ms 00:28:06.580 [2024-11-20 11:04:55.608634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.580 [2024-11-20 11:04:55.729360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.580 [2024-11-20 11:04:55.729412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:06.580 [2024-11-20 11:04:55.729426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 120.861 ms 00:28:06.580 [2024-11-20 11:04:55.729443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.580 [2024-11-20 11:04:55.763776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.580 [2024-11-20 11:04:55.763812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:06.580 [2024-11-20 11:04:55.763824] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 34.371 ms 00:28:06.580 [2024-11-20 11:04:55.763833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.580 [2024-11-20 11:04:55.799154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.580 [2024-11-20 11:04:55.799330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:06.580 [2024-11-20 11:04:55.799351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.342 ms 00:28:06.580 [2024-11-20 11:04:55.799361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.840 [2024-11-20 11:04:55.835129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.840 [2024-11-20 11:04:55.835167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:06.840 [2024-11-20 11:04:55.835180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.737 ms 00:28:06.840 [2024-11-20 11:04:55.835190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.840 [2024-11-20 11:04:55.870103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.840 [2024-11-20 11:04:55.870257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:06.840 [2024-11-20 11:04:55.870277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.898 ms 00:28:06.840 [2024-11-20 11:04:55.870288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.840 [2024-11-20 11:04:55.870428] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:06.840 [2024-11-20 11:04:55.870451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 106240 / 261120 wr_cnt: 1 state: open 00:28:06.840 [2024-11-20 11:04:55.870464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870620] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870886] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.870992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 
11:04:55.871152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:06.840 [2024-11-20 11:04:55.871278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:28:06.841 [2024-11-20 11:04:55.871414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:06.841 [2024-11-20 11:04:55.871548] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:06.841 [2024-11-20 11:04:55.871557] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 690cb0a8-9167-4dbe-8ee2-0c9a3b18b8b9 00:28:06.841 [2024-11-20 11:04:55.871569] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 106240 00:28:06.841 [2024-11-20 11:04:55.871584] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107200 00:28:06.841 [2024-11-20 11:04:55.871612] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 106240 00:28:06.841 [2024-11-20 11:04:55.871623] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090 00:28:06.841 [2024-11-20 11:04:55.871633] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:06.841 [2024-11-20 11:04:55.871643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:06.841 [2024-11-20 11:04:55.871653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:06.841 [2024-11-20 11:04:55.871662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:06.841 [2024-11-20 11:04:55.871671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:06.841 [2024-11-20 11:04:55.871680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.841 [2024-11-20 11:04:55.871690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:06.841 [2024-11-20 11:04:55.871701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:28:06.841 [2024-11-20 11:04:55.871710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.841 [2024-11-20 11:04:55.890510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:06.841 [2024-11-20 11:04:55.890541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:06.841 [2024-11-20 11:04:55.890553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.795 ms 00:28:06.841 [2024-11-20 11:04:55.890562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.841 [2024-11-20 11:04:55.891127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:06.841 [2024-11-20 11:04:55.891145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:06.841 [2024-11-20 11:04:55.891156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:28:06.841 [2024-11-20 11:04:55.891166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.841 [2024-11-20 11:04:55.942207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.841 [2024-11-20 11:04:55.942240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:06.841 [2024-11-20 11:04:55.942253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.841 [2024-11-20 11:04:55.942264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.841 [2024-11-20 11:04:55.942315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.841 [2024-11-20 11:04:55.942327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:06.841 [2024-11-20 11:04:55.942336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.841 [2024-11-20 11:04:55.942346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.841 [2024-11-20 11:04:55.942410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.841 [2024-11-20 11:04:55.942423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:06.841 [2024-11-20 11:04:55.942433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.841 [2024-11-20 11:04:55.942443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.841 [2024-11-20 11:04:55.942459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.841 [2024-11-20 11:04:55.942469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:06.841 [2024-11-20 11:04:55.942479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.841 [2024-11-20 11:04:55.942488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:06.841 [2024-11-20 11:04:56.058952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:06.841 [2024-11-20 11:04:56.058998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:06.841 [2024-11-20 11:04:56.059012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:06.841 [2024-11-20 11:04:56.059022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.100 [2024-11-20 11:04:56.152796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.100 [2024-11-20 11:04:56.152843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:07.100 [2024-11-20 11:04:56.152857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.100 [2024-11-20 11:04:56.152867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.100 [2024-11-20 
11:04:56.152977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.100 [2024-11-20 11:04:56.152989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:07.100 [2024-11-20 11:04:56.152999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.100 [2024-11-20 11:04:56.153009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.100 [2024-11-20 11:04:56.153046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.100 [2024-11-20 11:04:56.153057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:07.100 [2024-11-20 11:04:56.153067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.100 [2024-11-20 11:04:56.153077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.100 [2024-11-20 11:04:56.153178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.100 [2024-11-20 11:04:56.153194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:07.100 [2024-11-20 11:04:56.153204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.100 [2024-11-20 11:04:56.153214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.100 [2024-11-20 11:04:56.153247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.100 [2024-11-20 11:04:56.153260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:07.100 [2024-11-20 11:04:56.153270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.100 [2024-11-20 11:04:56.153279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.100 [2024-11-20 11:04:56.153316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.100 [2024-11-20 11:04:56.153331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:07.100 [2024-11-20 11:04:56.153341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.100 [2024-11-20 11:04:56.153351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.100 [2024-11-20 11:04:56.153391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:07.100 [2024-11-20 11:04:56.153402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:07.100 [2024-11-20 11:04:56.153411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:07.100 [2024-11-20 11:04:56.153421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:07.100 [2024-11-20 11:04:56.153532] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 646.464 ms, result 0 00:28:08.476 00:28:08.476 00:28:08.476 11:04:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:10.379 11:04:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:10.379 [2024-11-20 11:04:59.266198] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
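Two quick consistency checks on the numbers above, as a hedged sketch: the WAF in the stats dump works out to total writes over user writes, and the spdk_dd --count above works out to the 1024 MiB that the "Copying: …/1024 [MB]" progress reports. The 4 KiB FTL logical block size is an assumption here, though every size printed in this log agrees with it:

    # Checks against the figures logged above; 4 KiB block size is assumed.
    total_writes = 107200                  # "total writes" in the stats dump
    user_writes = 106240                   # "user writes"
    print(total_writes / user_writes)      # 1.00903..., logged as "WAF: 1.0090"

    blocks = 262144                        # spdk_dd --count=262144
    print(blocks * 4096 // (1024 * 1024))  # 1024 -> the ".../1024 [MB]" total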
00:28:10.379 [2024-11-20 11:04:59.266302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82300 ] 00:28:10.379 [2024-11-20 11:04:59.442950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.379 [2024-11-20 11:04:59.551342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.947 [2024-11-20 11:04:59.898365] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:10.947 [2024-11-20 11:04:59.898694] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:10.947 [2024-11-20 11:05:00.058850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.948 [2024-11-20 11:05:00.058901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:10.948 [2024-11-20 11:05:00.058922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:10.948 [2024-11-20 11:05:00.058933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 11:05:00.058977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.948 [2024-11-20 11:05:00.058990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:10.948 [2024-11-20 11:05:00.059004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:10.948 [2024-11-20 11:05:00.059015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 11:05:00.059035] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:10.948 [2024-11-20 11:05:00.060029] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:10.948 [2024-11-20 11:05:00.060051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.948 [2024-11-20 11:05:00.060062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:10.948 [2024-11-20 11:05:00.060073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:28:10.948 [2024-11-20 11:05:00.060083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 11:05:00.061464] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:10.948 [2024-11-20 11:05:00.080565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.948 [2024-11-20 11:05:00.080737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:10.948 [2024-11-20 11:05:00.080775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.131 ms 00:28:10.948 [2024-11-20 11:05:00.080786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 11:05:00.080852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.948 [2024-11-20 11:05:00.080865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:10.948 [2024-11-20 11:05:00.080876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:10.948 [2024-11-20 11:05:00.080886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 11:05:00.087511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:10.948 [2024-11-20 11:05:00.087539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:10.948 [2024-11-20 11:05:00.087550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.564 ms 00:28:10.948 [2024-11-20 11:05:00.087560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 11:05:00.087656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.948 [2024-11-20 11:05:00.087670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:10.948 [2024-11-20 11:05:00.087680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:28:10.948 [2024-11-20 11:05:00.087689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 11:05:00.087726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.948 [2024-11-20 11:05:00.087737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:10.948 [2024-11-20 11:05:00.087746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:10.948 [2024-11-20 11:05:00.087755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 11:05:00.087776] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:10.948 [2024-11-20 11:05:00.092694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.948 [2024-11-20 11:05:00.092856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:10.948 [2024-11-20 11:05:00.092975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.930 ms 00:28:10.948 [2024-11-20 11:05:00.093018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 11:05:00.093072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.948 [2024-11-20 11:05:00.093104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:10.948 [2024-11-20 11:05:00.093134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:10.948 [2024-11-20 11:05:00.093219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 11:05:00.093303] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:10.948 [2024-11-20 11:05:00.093352] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:10.948 [2024-11-20 11:05:00.093425] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:10.948 [2024-11-20 11:05:00.093576] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:10.948 [2024-11-20 11:05:00.093727] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:10.948 [2024-11-20 11:05:00.093845] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:10.948 [2024-11-20 11:05:00.093897] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:10.948 [2024-11-20 11:05:00.093947] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:10.948 [2024-11-20 11:05:00.094062] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:10.948 [2024-11-20 11:05:00.094158] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:10.948 [2024-11-20 11:05:00.094193] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:10.948 [2024-11-20 11:05:00.094264] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:10.948 [2024-11-20 11:05:00.094297] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:10.948 [2024-11-20 11:05:00.094316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.948 [2024-11-20 11:05:00.094326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:10.948 [2024-11-20 11:05:00.094337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:28:10.948 [2024-11-20 11:05:00.094347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 11:05:00.094430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.948 [2024-11-20 11:05:00.094442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:10.948 [2024-11-20 11:05:00.094453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:10.948 [2024-11-20 11:05:00.094462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.948 [2024-11-20 11:05:00.094564] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:10.948 [2024-11-20 11:05:00.094583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:10.948 [2024-11-20 11:05:00.094605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:10.948 [2024-11-20 11:05:00.094616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:10.948 [2024-11-20 11:05:00.094627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:10.948 [2024-11-20 11:05:00.094636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:10.948 [2024-11-20 11:05:00.094646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:10.948 [2024-11-20 11:05:00.094655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:10.948 [2024-11-20 11:05:00.094664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:10.948 [2024-11-20 11:05:00.094673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:10.948 [2024-11-20 11:05:00.094683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:10.948 [2024-11-20 11:05:00.094693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:10.948 [2024-11-20 11:05:00.094702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:10.948 [2024-11-20 11:05:00.094711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:10.948 [2024-11-20 11:05:00.094720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:10.948 [2024-11-20 11:05:00.094738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:10.948 [2024-11-20 11:05:00.094748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:10.948 [2024-11-20 11:05:00.094757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:10.948 [2024-11-20 11:05:00.094766] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:10.948 [2024-11-20 11:05:00.094776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:10.948 [2024-11-20 11:05:00.094786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:10.949 [2024-11-20 11:05:00.094795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:10.949 [2024-11-20 11:05:00.094805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:10.949 [2024-11-20 11:05:00.094815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:10.949 [2024-11-20 11:05:00.094825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:10.949 [2024-11-20 11:05:00.094834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:10.949 [2024-11-20 11:05:00.094843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:10.949 [2024-11-20 11:05:00.094852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:10.949 [2024-11-20 11:05:00.094861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:10.949 [2024-11-20 11:05:00.094871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:10.949 [2024-11-20 11:05:00.094880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:10.949 [2024-11-20 11:05:00.094890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:10.949 [2024-11-20 11:05:00.094899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:10.949 [2024-11-20 11:05:00.094908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:10.949 [2024-11-20 11:05:00.094917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:10.949 [2024-11-20 11:05:00.094926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:10.949 [2024-11-20 11:05:00.094935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:10.949 [2024-11-20 11:05:00.094944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:10.949 [2024-11-20 11:05:00.094953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:10.949 [2024-11-20 11:05:00.094962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:10.949 [2024-11-20 11:05:00.094971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:10.949 [2024-11-20 11:05:00.094980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:10.949 [2024-11-20 11:05:00.094989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:10.949 [2024-11-20 11:05:00.094998] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:10.949 [2024-11-20 11:05:00.095007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:10.949 [2024-11-20 11:05:00.095017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:10.949 [2024-11-20 11:05:00.095026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:10.949 [2024-11-20 11:05:00.095036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:10.949 [2024-11-20 11:05:00.095045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:10.949 [2024-11-20 11:05:00.095054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:10.949 
[2024-11-20 11:05:00.095063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:10.949 [2024-11-20 11:05:00.095072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:10.949 [2024-11-20 11:05:00.095081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:10.949 [2024-11-20 11:05:00.095092] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:10.949 [2024-11-20 11:05:00.095104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:10.949 [2024-11-20 11:05:00.095117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:10.949 [2024-11-20 11:05:00.095127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:10.949 [2024-11-20 11:05:00.095137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:10.949 [2024-11-20 11:05:00.095147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:10.949 [2024-11-20 11:05:00.095158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:10.949 [2024-11-20 11:05:00.095168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:10.949 [2024-11-20 11:05:00.095178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:10.949 [2024-11-20 11:05:00.095188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:10.949 [2024-11-20 11:05:00.095199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:10.949 [2024-11-20 11:05:00.095209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:10.949 [2024-11-20 11:05:00.095219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:10.949 [2024-11-20 11:05:00.095229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:10.949 [2024-11-20 11:05:00.095239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:10.949 [2024-11-20 11:05:00.095249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:10.949 [2024-11-20 11:05:00.095258] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:10.949 [2024-11-20 11:05:00.095272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:10.949 [2024-11-20 11:05:00.095285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:10.949 [2024-11-20 11:05:00.095295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:10.949 [2024-11-20 11:05:00.095305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:10.949 [2024-11-20 11:05:00.095316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:10.949 [2024-11-20 11:05:00.095326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.949 [2024-11-20 11:05:00.095336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:10.949 [2024-11-20 11:05:00.095346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:28:10.949 [2024-11-20 11:05:00.095356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.949 [2024-11-20 11:05:00.132427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.949 [2024-11-20 11:05:00.132462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:10.949 [2024-11-20 11:05:00.132475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.087 ms 00:28:10.949 [2024-11-20 11:05:00.132485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.949 [2024-11-20 11:05:00.132577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.949 [2024-11-20 11:05:00.132588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:10.949 [2024-11-20 11:05:00.132599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:10.949 [2024-11-20 11:05:00.132768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.202776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.202947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:11.209 [2024-11-20 11:05:00.202968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.027 ms 00:28:11.209 [2024-11-20 11:05:00.202979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.203014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.203025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:11.209 [2024-11-20 11:05:00.203036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:11.209 [2024-11-20 11:05:00.203051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.203524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.203538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:11.209 [2024-11-20 11:05:00.203549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:28:11.209 [2024-11-20 11:05:00.203559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.203692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.203707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:11.209 [2024-11-20 11:05:00.203717] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:28:11.209 [2024-11-20 11:05:00.203735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.221189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.221350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:11.209 [2024-11-20 11:05:00.221376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.462 ms 00:28:11.209 [2024-11-20 11:05:00.221387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.239365] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:11.209 [2024-11-20 11:05:00.239402] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:11.209 [2024-11-20 11:05:00.239416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.239426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:11.209 [2024-11-20 11:05:00.239437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.959 ms 00:28:11.209 [2024-11-20 11:05:00.239446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.267353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.267400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:11.209 [2024-11-20 11:05:00.267413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.914 ms 00:28:11.209 [2024-11-20 11:05:00.267424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.285186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.285230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:11.209 [2024-11-20 11:05:00.285242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.744 ms 00:28:11.209 [2024-11-20 11:05:00.285251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.302078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.302110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:11.209 [2024-11-20 11:05:00.302121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.820 ms 00:28:11.209 [2024-11-20 11:05:00.302147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.302885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.302926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:11.209 [2024-11-20 11:05:00.302939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:28:11.209 [2024-11-20 11:05:00.302953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.382253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.382308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:11.209 [2024-11-20 11:05:00.382329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 79.407 ms 00:28:11.209 [2024-11-20 11:05:00.382339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.392387] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:11.209 [2024-11-20 11:05:00.394638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.394788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:11.209 [2024-11-20 11:05:00.394809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.273 ms 00:28:11.209 [2024-11-20 11:05:00.394820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.394899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.394913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:11.209 [2024-11-20 11:05:00.394925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:11.209 [2024-11-20 11:05:00.394938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.396404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.396440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:11.209 [2024-11-20 11:05:00.396452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.424 ms 00:28:11.209 [2024-11-20 11:05:00.396462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.396488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.396500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:11.209 [2024-11-20 11:05:00.396511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:11.209 [2024-11-20 11:05:00.396521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.396560] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:11.209 [2024-11-20 11:05:00.396576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.396586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:11.209 [2024-11-20 11:05:00.396612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:11.209 [2024-11-20 11:05:00.396623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.430774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.430810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:11.209 [2024-11-20 11:05:00.430823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.187 ms 00:28:11.209 [2024-11-20 11:05:00.430838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.209 [2024-11-20 11:05:00.430906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.209 [2024-11-20 11:05:00.430917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:11.209 [2024-11-20 11:05:00.430928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:11.209 [2024-11-20 11:05:00.430937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
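The SB metadata layout dumped earlier gives region sizes as blk_sz in blocks, while dump_region reports MiB; converting with the same assumed 4 KiB block reproduces the MiB figures. Matching the blk_offs values to the dump_region offsets is what pins the region identities used below (e.g. type 0x2 at blk_offs:0x20 lines up with l2p at offset 0.12 MiB), so treat those labels as inferred:

    # Convert blk_sz values from the SB metadata layout into the MiB that
    # dump_region reports, assuming the 4 KiB FTL block size.
    BLOCK = 4096
    for name, blk_sz in [("sb", 0x20), ("l2p", 0x5000), ("data_btm", 0x1900000)]:
        print(name, blk_sz * BLOCK / (1024 * 1024), "MiB")
    # sb       -> 0.125 MiB     (printed above as "0.12 MiB")
    # l2p      -> 80.0 MiB      ("80.00 MiB")
    # data_btm -> 102400.0 MiB  ("102400.00 MiB")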
00:28:11.209 [2024-11-20 11:05:00.432007] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 373.299 ms, result 0
00:28:12.584 [2024-11-20T11:05:02.773Z] Copying: 1312/1048576 [kB] (1312 kBps)
[2024-11-20T11:05:03.706Z] Copying: 10240/1048576 [kB] (8928 kBps)
[2024-11-20T11:05:04.640Z] Copying: 42/1024 [MB] (32 MBps)
[2024-11-20T11:05:06.016Z] Copying: 75/1024 [MB] (33 MBps)
[2024-11-20T11:05:06.949Z] Copying: 109/1024 [MB] (33 MBps)
[2024-11-20T11:05:07.883Z] Copying: 142/1024 [MB] (33 MBps)
[2024-11-20T11:05:08.819Z] Copying: 176/1024 [MB] (33 MBps)
[2024-11-20T11:05:09.752Z] Copying: 209/1024 [MB] (33 MBps)
[2024-11-20T11:05:10.684Z] Copying: 243/1024 [MB] (33 MBps)
[2024-11-20T11:05:12.061Z] Copying: 276/1024 [MB] (33 MBps)
[2024-11-20T11:05:12.666Z] Copying: 309/1024 [MB] (33 MBps)
[2024-11-20T11:05:13.641Z] Copying: 343/1024 [MB] (33 MBps)
[2024-11-20T11:05:15.016Z] Copying: 376/1024 [MB] (33 MBps)
[2024-11-20T11:05:15.952Z] Copying: 409/1024 [MB] (33 MBps)
[2024-11-20T11:05:16.890Z] Copying: 443/1024 [MB] (33 MBps)
[2024-11-20T11:05:17.827Z] Copying: 477/1024 [MB] (33 MBps)
[2024-11-20T11:05:18.764Z] Copying: 510/1024 [MB] (33 MBps)
[2024-11-20T11:05:19.703Z] Copying: 544/1024 [MB] (33 MBps)
[2024-11-20T11:05:20.640Z] Copying: 577/1024 [MB] (33 MBps)
[2024-11-20T11:05:22.016Z] Copying: 611/1024 [MB] (33 MBps)
[2024-11-20T11:05:22.952Z] Copying: 644/1024 [MB] (33 MBps)
[2024-11-20T11:05:23.890Z] Copying: 677/1024 [MB] (33 MBps)
[2024-11-20T11:05:24.827Z] Copying: 711/1024 [MB] (33 MBps)
[2024-11-20T11:05:25.762Z] Copying: 743/1024 [MB] (32 MBps)
[2024-11-20T11:05:26.700Z] Copying: 777/1024 [MB] (33 MBps)
[2024-11-20T11:05:27.637Z] Copying: 810/1024 [MB] (33 MBps)
[2024-11-20T11:05:29.016Z] Copying: 844/1024 [MB] (33 MBps)
[2024-11-20T11:05:29.950Z] Copying: 877/1024 [MB] (33 MBps)
[2024-11-20T11:05:30.887Z] Copying: 910/1024 [MB] (32 MBps)
[2024-11-20T11:05:31.825Z] Copying: 942/1024 [MB] (32 MBps)
[2024-11-20T11:05:32.830Z] Copying: 976/1024 [MB] (33 MBps)
[2024-11-20T11:05:33.088Z] Copying: 1009/1024 [MB] (33 MBps)
[2024-11-20T11:05:33.347Z] Copying: 1024/1024 [MB] (average 31 MBps)
[2024-11-20 11:05:33.103581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.094 [2024-11-20 11:05:33.103676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:44.094 [2024-11-20 11:05:33.103715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:28:44.094 [2024-11-20 11:05:33.103732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.094 [2024-11-20 11:05:33.103768] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:44.094 [2024-11-20 11:05:33.110282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.094 [2024-11-20 11:05:33.110347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:28:44.094 [2024-11-20 11:05:33.110366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.385 ms
00:28:44.094 [2024-11-20 11:05:33.110379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.094 [2024-11-20 11:05:33.110719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.094 [2024-11-20 11:05:33.110742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:28:44.094 [2024-11-20 11:05:33.110763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms
00:28:44.094 [2024-11-20 11:05:33.110776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.094 [2024-11-20 11:05:33.123393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.094 [2024-11-20 11:05:33.123450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:28:44.094 [2024-11-20 11:05:33.123466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.615 ms
00:28:44.094 [2024-11-20 11:05:33.123477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.094 [2024-11-20 11:05:33.128422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.094 [2024-11-20 11:05:33.128456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:28:44.094 [2024-11-20 11:05:33.128468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.905 ms
00:28:44.094 [2024-11-20 11:05:33.128484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.094 [2024-11-20 11:05:33.162574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.094 [2024-11-20 11:05:33.162753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:28:44.094 [2024-11-20 11:05:33.162776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.089 ms
00:28:44.094 [2024-11-20 11:05:33.162786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.094 [2024-11-20 11:05:33.182310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.094 [2024-11-20 11:05:33.182463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:28:44.094 [2024-11-20 11:05:33.182509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.519 ms
00:28:44.094 [2024-11-20 11:05:33.182521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.094 [2024-11-20 11:05:33.184645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.094 [2024-11-20 11:05:33.184684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:28:44.094 [2024-11-20 11:05:33.184698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.030 ms
00:28:44.094 [2024-11-20 11:05:33.184708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.094 [2024-11-20 11:05:33.218719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.094 [2024-11-20 11:05:33.218761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:28:44.094 [2024-11-20 11:05:33.218774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.042 ms
00:28:44.094 [2024-11-20 11:05:33.218783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.094 [2024-11-20 11:05:33.252056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.094 [2024-11-20 11:05:33.252092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:28:44.094 [2024-11-20 11:05:33.252117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.293 ms
00:28:44.094 [2024-11-20 11:05:33.252127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.094 [2024-11-20 11:05:33.285315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.094 [2024-11-20 11:05:33.285347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:28:44.094 [2024-11-20 11:05:33.285359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.206 ms
00:28:44.094 [2024-11-20 11:05:33.285368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.094 [2024-11-20 11:05:33.318785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.094 [2024-11-20 11:05:33.318826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:28:44.094 [2024-11-20 11:05:33.318838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.402 ms
00:28:44.094 [2024-11-20 11:05:33.318847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.094 [2024-11-20 11:05:33.318882] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:44.094 [2024-11-20 11:05:33.318896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:28:44.094 [2024-11-20 11:05:33.318907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:28:44.094 [2024-11-20 11:05:33.318917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.318928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.318938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.318948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.318958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.318968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.318978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.318988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.318998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:28:44.094 [2024-11-20 11:05:33.319566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:28:44.095 [2024-11-20 11:05:33.319894] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:28:44.095 [2024-11-20 11:05:33.319903] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 690cb0a8-9167-4dbe-8ee2-0c9a3b18b8b9
00:28:44.095 [2024-11-20 11:05:33.319913] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:28:44.095 [2024-11-20 11:05:33.319922] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 158400
00:28:44.095 [2024-11-20 11:05:33.319931] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 156416
00:28:44.095 [2024-11-20 11:05:33.319944] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0127
00:28:44.095 [2024-11-20 11:05:33.319953] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:28:44.095 [2024-11-20 11:05:33.319962] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:28:44.095 [2024-11-20 11:05:33.319971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:28:44.095 [2024-11-20 11:05:33.319988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:28:44.095 [2024-11-20 11:05:33.319997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:28:44.095 [2024-11-20 11:05:33.320006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.095 [2024-11-20 11:05:33.320015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:28:44.095 [2024-11-20 11:05:33.320025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.126 ms
00:28:44.095 [2024-11-20 11:05:33.320034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.095 [2024-11-20 11:05:33.338972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.095 [2024-11-20 11:05:33.339007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:28:44.095 [2024-11-20 11:05:33.339019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.923 ms
00:28:44.095 [2024-11-20 11:05:33.339044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.095 [2024-11-20 11:05:33.339559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:44.095 [2024-11-20 11:05:33.339572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:28:44.095 [2024-11-20 11:05:33.339583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms
00:28:44.095 [2024-11-20 11:05:33.339606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.354 [2024-11-20 11:05:33.389147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:44.354 [2024-11-20 11:05:33.389180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:28:44.354 [2024-11-20 11:05:33.389192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:44.354 [2024-11-20 11:05:33.389201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.354 [2024-11-20 11:05:33.389251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:44.354 [2024-11-20 11:05:33.389261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:28:44.354 [2024-11-20 11:05:33.389271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:44.354 [2024-11-20 11:05:33.389280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.354 [2024-11-20 11:05:33.389339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:44.354 [2024-11-20 11:05:33.389356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:28:44.354 [2024-11-20 11:05:33.389367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:44.354 [2024-11-20 11:05:33.389376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.354 [2024-11-20 11:05:33.389392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:44.354 [2024-11-20 11:05:33.389401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:28:44.354 [2024-11-20 11:05:33.389410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:44.354 [2024-11-20 11:05:33.389419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.354 [2024-11-20 11:05:33.505222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:44.354 [2024-11-20 11:05:33.505272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:28:44.354 [2024-11-20 11:05:33.505286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:44.354 [2024-11-20 11:05:33.505295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.354 [2024-11-20 11:05:33.603074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:44.354 [2024-11-20 11:05:33.603116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:28:44.354 [2024-11-20 11:05:33.603129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:44.354 [2024-11-20 11:05:33.603155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.354 [2024-11-20 11:05:33.603241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:44.354 [2024-11-20 11:05:33.603253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:28:44.354 [2024-11-20 11:05:33.603269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:44.354 [2024-11-20 11:05:33.603279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.354 [2024-11-20 11:05:33.603318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:44.354 [2024-11-20 11:05:33.603330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:28:44.354 [2024-11-20 11:05:33.603340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:44.354 [2024-11-20 11:05:33.603350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.354 [2024-11-20 11:05:33.603460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:44.354 [2024-11-20 11:05:33.603473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:28:44.354 [2024-11-20 11:05:33.603483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:44.354 [2024-11-20 11:05:33.603498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.354 [2024-11-20 11:05:33.603531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:44.354 [2024-11-20 11:05:33.603543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:28:44.354 [2024-11-20 11:05:33.603554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:44.354 [2024-11-20 11:05:33.603563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.354 [2024-11-20 11:05:33.603600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:44.354 [2024-11-20 11:05:33.603799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:28:44.354 [2024-11-20 11:05:33.603846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:44.354 [2024-11-20 11:05:33.603883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.354 [2024-11-20 11:05:33.603970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:44.354 [2024-11-20 11:05:33.604007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:28:44.354 [2024-11-20 11:05:33.604038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:28:44.354 [2024-11-20 11:05:33.604068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:44.354 [2024-11-20 11:05:33.604207] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 501.418 ms, result 0
00:28:45.732 
00:28:45.732 
00:28:45.732 11:05:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:28:47.110 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:28:47.110 11:05:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:28:47.110 [2024-11-20 11:05:36.303165] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization...
00:28:47.110 [2024-11-20 11:05:36.303298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82676 ]
00:28:47.371 [2024-11-20 11:05:36.482994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:47.371 [2024-11-20 11:05:36.589426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:47.941 [2024-11-20 11:05:36.934225] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:47.941 [2024-11-20 11:05:36.934290] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:47.941 [2024-11-20 11:05:37.095005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.941 [2024-11-20 11:05:37.095243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:28:47.941 [2024-11-20 11:05:37.095278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:28:47.941 [2024-11-20 11:05:37.095289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:47.941 [2024-11-20 11:05:37.095354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.941 [2024-11-20 11:05:37.095369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:28:47.941 [2024-11-20 11:05:37.095384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:28:47.941 [2024-11-20 11:05:37.095395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:47.941 [2024-11-20 11:05:37.095418] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:28:47.941 [2024-11-20 11:05:37.096420] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:28:47.941 [2024-11-20 11:05:37.096447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.941 [2024-11-20 11:05:37.096458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:28:47.941 [2024-11-20 11:05:37.096469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms
00:28:47.941 [2024-11-20 11:05:37.096479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:47.941 [2024-11-20 11:05:37.097959] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:28:47.941 [2024-11-20 11:05:37.115642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.941 [2024-11-20 11:05:37.115681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:28:47.941 [2024-11-20 11:05:37.115695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.713 ms
00:28:47.941 [2024-11-20 11:05:37.115721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:47.941 [2024-11-20 11:05:37.115784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.941 [2024-11-20 11:05:37.115796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:28:47.941 [2024-11-20 11:05:37.115807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms
00:28:47.941 [2024-11-20 11:05:37.115816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:47.941 [2024-11-20 11:05:37.122780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.941 [2024-11-20 11:05:37.122930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:28:47.941 [2024-11-20 11:05:37.123063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.904 ms
00:28:47.941 [2024-11-20 11:05:37.123100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:47.941 [2024-11-20 11:05:37.123209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.941 [2024-11-20 11:05:37.123302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:28:47.941 [2024-11-20 11:05:37.123338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms
00:28:47.941 [2024-11-20 11:05:37.123369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:47.941 [2024-11-20 11:05:37.123477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.941 [2024-11-20 11:05:37.123516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:28:47.941 [2024-11-20 11:05:37.123547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:28:47.941 [2024-11-20 11:05:37.123809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:47.941 [2024-11-20 11:05:37.123870] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:47.941 [2024-11-20 11:05:37.128635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.941 [2024-11-20 11:05:37.128764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:28:47.941 [2024-11-20 11:05:37.128911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.779 ms
00:28:47.941 [2024-11-20 11:05:37.128934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:47.941 [2024-11-20 11:05:37.128974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.942 [2024-11-20 11:05:37.128985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:28:47.942 [2024-11-20 11:05:37.128995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:28:47.942 [2024-11-20 11:05:37.129005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:47.942 [2024-11-20 11:05:37.129059] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:28:47.942 [2024-11-20 11:05:37.129083] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:28:47.942 [2024-11-20 11:05:37.129120] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:28:47.942 [2024-11-20 11:05:37.129141] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:28:47.942 [2024-11-20 11:05:37.129230] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:28:47.942 [2024-11-20 11:05:37.129243] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:28:47.942 [2024-11-20 11:05:37.129256] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:28:47.942 [2024-11-20 11:05:37.129269] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:28:47.942 [2024-11-20 11:05:37.129281] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:28:47.942 [2024-11-20 11:05:37.129292] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:28:47.942 [2024-11-20 11:05:37.129302] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:28:47.942 [2024-11-20 11:05:37.129312] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:28:47.942 [2024-11-20 11:05:37.129322] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:28:47.942 [2024-11-20 11:05:37.129336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.942 [2024-11-20 11:05:37.129346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:28:47.942 [2024-11-20 11:05:37.129357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms
00:28:47.942 [2024-11-20 11:05:37.129367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:47.942 [2024-11-20 11:05:37.129437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.942 [2024-11-20 11:05:37.129447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:28:47.942 [2024-11-20 11:05:37.129458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:28:47.942 [2024-11-20 11:05:37.129467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:47.942 [2024-11-20 11:05:37.129559] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:28:47.942 [2024-11-20 11:05:37.129576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:28:47.942 [2024-11-20 11:05:37.129587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:28:47.942 [2024-11-20 11:05:37.129618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:47.942 [2024-11-20 11:05:37.129629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:28:47.942 [2024-11-20 11:05:37.129638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:28:47.942 [2024-11-20 11:05:37.129647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:28:47.942 [2024-11-20 11:05:37.129656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:28:47.942 [2024-11-20 11:05:37.129666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:28:47.942 [2024-11-20 11:05:37.129676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:28:47.942 [2024-11-20 11:05:37.129685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:28:47.942 [2024-11-20 11:05:37.129695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:28:47.942 [2024-11-20 11:05:37.129704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:28:47.942 [2024-11-20 11:05:37.129713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:28:47.942 [2024-11-20 11:05:37.129723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:28:47.942 [2024-11-20 11:05:37.129741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:47.942 [2024-11-20 11:05:37.129750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:28:47.942 [2024-11-20 11:05:37.129760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:28:47.942 [2024-11-20 11:05:37.129769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:47.942 [2024-11-20 11:05:37.129778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:28:47.942 [2024-11-20 11:05:37.129787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:28:47.942 [2024-11-20 11:05:37.129796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:28:47.942 [2024-11-20 11:05:37.129805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:28:47.942 [2024-11-20 11:05:37.129814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:28:47.942 [2024-11-20 11:05:37.129823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:28:47.942 [2024-11-20 11:05:37.129833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:28:47.942 [2024-11-20 11:05:37.129842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:28:47.942 [2024-11-20 11:05:37.129852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:28:47.942 [2024-11-20 11:05:37.129860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:28:47.942 [2024-11-20 11:05:37.129870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:28:47.942 [2024-11-20 11:05:37.129879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:28:47.942 [2024-11-20 11:05:37.129887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:28:47.942 [2024-11-20 11:05:37.129899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:28:47.942 [2024-11-20 11:05:37.129908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:28:47.942 [2024-11-20 11:05:37.129917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:28:47.942 [2024-11-20 11:05:37.129925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:28:47.942 [2024-11-20 11:05:37.129934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:28:47.942 [2024-11-20 11:05:37.129943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:28:47.942 [2024-11-20 11:05:37.129952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:28:47.942 [2024-11-20 11:05:37.129961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:47.942 [2024-11-20 11:05:37.129970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:28:47.942 [2024-11-20 11:05:37.129979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:28:47.942 [2024-11-20 11:05:37.129988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:47.942 [2024-11-20 11:05:37.129998] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:28:47.942 [2024-11-20 11:05:37.130009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:28:47.942 [2024-11-20 11:05:37.130018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:28:47.942 [2024-11-20 11:05:37.130029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:28:47.942 [2024-11-20 11:05:37.130038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:28:47.942 [2024-11-20 11:05:37.130048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:28:47.942 [2024-11-20 11:05:37.130057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:28:47.942 [2024-11-20 11:05:37.130067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:28:47.942 [2024-11-20 11:05:37.130076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:28:47.942 [2024-11-20 11:05:37.130085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:28:47.942 [2024-11-20 11:05:37.130095] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:28:47.942 [2024-11-20 11:05:37.130107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:28:47.942 [2024-11-20 11:05:37.130119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:28:47.942 [2024-11-20 11:05:37.130129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:28:47.942 [2024-11-20 11:05:37.130139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:28:47.942 [2024-11-20 11:05:37.130149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:28:47.942 [2024-11-20 11:05:37.130159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:28:47.942 [2024-11-20 11:05:37.130169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:28:47.942 [2024-11-20 11:05:37.130179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:28:47.942 [2024-11-20 11:05:37.130189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:28:47.942 [2024-11-20 11:05:37.130199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:28:47.942 [2024-11-20 11:05:37.130209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:28:47.942 [2024-11-20 11:05:37.130219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:28:47.942 [2024-11-20 11:05:37.130229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:28:47.942 [2024-11-20 11:05:37.130239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:28:47.942 [2024-11-20 11:05:37.130249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:28:47.942 [2024-11-20 11:05:37.130259] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:28:47.942 [2024-11-20 11:05:37.130274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:28:47.942 [2024-11-20 11:05:37.130285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:28:47.942 [2024-11-20 11:05:37.130295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:28:47.943 [2024-11-20 11:05:37.130305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:28:47.943 [2024-11-20 11:05:37.130316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:28:47.943 [2024-11-20 11:05:37.130327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.943 [2024-11-20 11:05:37.130338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:28:47.943 [2024-11-20 11:05:37.130348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms
00:28:47.943 [2024-11-20 11:05:37.130358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:47.943 [2024-11-20 11:05:37.167638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.943 [2024-11-20 11:05:37.167827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:28:47.943 [2024-11-20 11:05:37.167864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.296 ms
00:28:47.943 [2024-11-20 11:05:37.167875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:47.943 [2024-11-20 11:05:37.167958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.943 [2024-11-20 11:05:37.167968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:28:47.943 [2024-11-20 11:05:37.167979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms
00:28:47.943 [2024-11-20 11:05:37.167989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.203 [2024-11-20 11:05:37.243608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.203 [2024-11-20 11:05:37.243648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:28:48.203 [2024-11-20 11:05:37.243663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.687 ms
00:28:48.203 [2024-11-20 11:05:37.243690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.203 [2024-11-20 11:05:37.243732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.203 [2024-11-20 11:05:37.243743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:28:48.203 [2024-11-20 11:05:37.243754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:28:48.203 [2024-11-20 11:05:37.243779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.203 [2024-11-20 11:05:37.244276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.203 [2024-11-20 11:05:37.244291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:28:48.203 [2024-11-20 11:05:37.244302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms
00:28:48.203 [2024-11-20 11:05:37.244312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.203 [2024-11-20 11:05:37.244425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.203 [2024-11-20 11:05:37.244438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:28:48.203 [2024-11-20 11:05:37.244449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms
00:28:48.203 [2024-11-20 11:05:37.244464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.203 [2024-11-20 11:05:37.263941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.203 [2024-11-20 11:05:37.263977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:28:48.203 [2024-11-20 11:05:37.263993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.489 ms
00:28:48.203 [2024-11-20 11:05:37.264020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.203 [2024-11-20 11:05:37.282653] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:28:48.203 [2024-11-20 11:05:37.282692] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:28:48.203 [2024-11-20 11:05:37.282707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.203 [2024-11-20 11:05:37.282717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:28:48.203 [2024-11-20 11:05:37.282729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.615 ms
00:28:48.203 [2024-11-20 11:05:37.282738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.203 [2024-11-20 11:05:37.311540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.203 [2024-11-20 11:05:37.311606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:28:48.203 [2024-11-20 11:05:37.311636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.805 ms
00:28:48.203 [2024-11-20 11:05:37.311647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.203 [2024-11-20 11:05:37.329612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.203 [2024-11-20 11:05:37.329649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:28:48.203 [2024-11-20 11:05:37.329661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.950 ms
00:28:48.203 [2024-11-20 11:05:37.329687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.203 [2024-11-20 11:05:37.347256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.203 [2024-11-20 11:05:37.347291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:28:48.203 [2024-11-20 11:05:37.347304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.559 ms
00:28:48.203 [2024-11-20 11:05:37.347314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.203 [2024-11-20 11:05:37.348081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.203 [2024-11-20 11:05:37.348108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:28:48.203 [2024-11-20 11:05:37.348120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms
00:28:48.203 [2024-11-20 11:05:37.348135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.203 [2024-11-20 11:05:37.431463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.203 [2024-11-20 11:05:37.431513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:28:48.203 [2024-11-20 11:05:37.431535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.440 ms
00:28:48.203 [2024-11-20 11:05:37.431546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.203 [2024-11-20 11:05:37.442050] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:28:48.203 [2024-11-20 11:05:37.444853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.203 [2024-11-20 11:05:37.444885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:28:48.203 [2024-11-20 11:05:37.444898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.263 ms
00:28:48.203 [2024-11-20 11:05:37.444908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.203 [2024-11-20 11:05:37.444992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.203 [2024-11-20 11:05:37.445004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:28:48.203 [2024-11-20 11:05:37.445015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:28:48.203 [2024-11-20 11:05:37.445027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.203 [2024-11-20 11:05:37.446012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.203 [2024-11-20 11:05:37.446156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:28:48.203 [2024-11-20 11:05:37.446233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms
00:28:48.203 [2024-11-20 11:05:37.446269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.203 [2024-11-20 11:05:37.446323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.203 [2024-11-20 11:05:37.446400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:28:48.203 [2024-11-20 11:05:37.446436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:28:48.203 [2024-11-20 11:05:37.446466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.203 [2024-11-20 11:05:37.446573] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:28:48.203 [2024-11-20 11:05:37.446679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.204 [2024-11-20 11:05:37.446716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:28:48.204 [2024-11-20 11:05:37.446780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms
00:28:48.204 [2024-11-20 11:05:37.446814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.462 [2024-11-20 11:05:37.482044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.462 [2024-11-20 11:05:37.482187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:28:48.462 [2024-11-20 11:05:37.482207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.234 ms
00:28:48.462 [2024-11-20 11:05:37.482241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.462 [2024-11-20 11:05:37.482372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:48.462 [2024-11-20 11:05:37.482386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:28:48.462 [2024-11-20 11:05:37.482397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:28:48.462 [2024-11-20 11:05:37.482407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:48.462 [2024-11-20 11:05:37.483531] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 388.709 ms, result 0 00:28:49.843  [2024-11-20T11:05:40.035Z] Copying: 26/1024 [MB] (26 MBps) [… 37 intermediate carriage-return progress updates at ~26 MBps elided …] [2024-11-20T11:06:16.567Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-20 11:06:16.317894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.314 [2024-11-20 11:06:16.317960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:27.314 [2024-11-20 11:06:16.317981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:27.314 [2024-11-20 11:06:16.317995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.314 [2024-11-20 11:06:16.318022] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:27.314 [2024-11-20 11:06:16.324160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.314 [2024-11-20 11:06:16.324328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:27.314 [2024-11-20 11:06:16.324446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.124 ms 00:29:27.314 [2024-11-20 11:06:16.324493] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:29:27.314 [2024-11-20 11:06:16.324784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.314 [2024-11-20 11:06:16.324839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:27.314 [2024-11-20 11:06:16.324881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:29:27.314 [2024-11-20 11:06:16.324985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.314 [2024-11-20 11:06:16.328649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.314 [2024-11-20 11:06:16.328793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:27.314 [2024-11-20 11:06:16.328918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.585 ms 00:29:27.314 [2024-11-20 11:06:16.328940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.314 [2024-11-20 11:06:16.334642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.314 [2024-11-20 11:06:16.334673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:27.314 [2024-11-20 11:06:16.334684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.675 ms 00:29:27.314 [2024-11-20 11:06:16.334693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.314 [2024-11-20 11:06:16.369100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.314 [2024-11-20 11:06:16.369257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:27.314 [2024-11-20 11:06:16.369294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.406 ms 00:29:27.314 [2024-11-20 11:06:16.369304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.314 [2024-11-20 11:06:16.388721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.314 [2024-11-20 11:06:16.388758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:27.314 [2024-11-20 11:06:16.388771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.411 ms 00:29:27.314 [2024-11-20 11:06:16.388796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.314 [2024-11-20 11:06:16.390916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.314 [2024-11-20 11:06:16.390961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:27.314 [2024-11-20 11:06:16.390974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.080 ms 00:29:27.314 [2024-11-20 11:06:16.390984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.314 [2024-11-20 11:06:16.425532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.314 [2024-11-20 11:06:16.425568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:27.314 [2024-11-20 11:06:16.425580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.586 ms 00:29:27.314 [2024-11-20 11:06:16.425589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.314 [2024-11-20 11:06:16.460084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.314 [2024-11-20 11:06:16.460226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:27.314 [2024-11-20 11:06:16.460245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.486 ms 00:29:27.314 
[2024-11-20 11:06:16.460271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.314 [2024-11-20 11:06:16.493536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.314 [2024-11-20 11:06:16.493571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:27.314 [2024-11-20 11:06:16.493583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.284 ms 00:29:27.314 [2024-11-20 11:06:16.493604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.314 [2024-11-20 11:06:16.527448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.315 [2024-11-20 11:06:16.527604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:27.315 [2024-11-20 11:06:16.527624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.808 ms 00:29:27.315 [2024-11-20 11:06:16.527634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.315 [2024-11-20 11:06:16.527669] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:27.315 [2024-11-20 11:06:16.527684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:27.315 [2024-11-20 11:06:16.527702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:27.315 [2024-11-20 11:06:16.527713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 
00:29:27.315 [2024-11-20 11:06:16.527866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.527999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 
wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:27.315 [2024-11-20 11:06:16.528546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528643] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:27.316 [2024-11-20 11:06:16.528744] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:27.316 [2024-11-20 11:06:16.528758] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 690cb0a8-9167-4dbe-8ee2-0c9a3b18b8b9 00:29:27.316 [2024-11-20 11:06:16.528769] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:27.316 [2024-11-20 11:06:16.528779] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:27.316 [2024-11-20 11:06:16.528789] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:27.316 [2024-11-20 11:06:16.528799] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:27.316 [2024-11-20 11:06:16.528808] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:27.316 [2024-11-20 11:06:16.528818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:27.316 [2024-11-20 11:06:16.528839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:27.316 [2024-11-20 11:06:16.528848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:27.316 [2024-11-20 11:06:16.528867] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:27.316 [2024-11-20 11:06:16.528877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.316 [2024-11-20 11:06:16.528887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:27.316 [2024-11-20 11:06:16.528897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:29:27.316 [2024-11-20 11:06:16.528907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.316 [2024-11-20 11:06:16.548084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.316 [2024-11-20 11:06:16.548118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:27.316 [2024-11-20 11:06:16.548129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.156 ms 00:29:27.316 [2024-11-20 11:06:16.548139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.316 [2024-11-20 11:06:16.548653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.316 [2024-11-20 11:06:16.548665] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:27.316 [2024-11-20 11:06:16.548680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms 00:29:27.316 [2024-11-20 11:06:16.548690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.575 [2024-11-20 11:06:16.596075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.575 [2024-11-20 11:06:16.596227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:27.575 [2024-11-20 11:06:16.596264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.575 [2024-11-20 11:06:16.596274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.575 [2024-11-20 11:06:16.596326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.575 [2024-11-20 11:06:16.596337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:27.575 [2024-11-20 11:06:16.596354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.575 [2024-11-20 11:06:16.596364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.575 [2024-11-20 11:06:16.596429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.575 [2024-11-20 11:06:16.596443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:27.575 [2024-11-20 11:06:16.596454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.575 [2024-11-20 11:06:16.596464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.575 [2024-11-20 11:06:16.596480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.575 [2024-11-20 11:06:16.596490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:27.575 [2024-11-20 11:06:16.596500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.575 [2024-11-20 11:06:16.596514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.575 [2024-11-20 11:06:16.713324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.575 [2024-11-20 11:06:16.713385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:27.575 [2024-11-20 11:06:16.713399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.575 [2024-11-20 11:06:16.713425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.575 [2024-11-20 11:06:16.810020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.575 [2024-11-20 11:06:16.810198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:27.575 [2024-11-20 11:06:16.810237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.575 [2024-11-20 11:06:16.810254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.575 [2024-11-20 11:06:16.810341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.575 [2024-11-20 11:06:16.810353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:27.575 [2024-11-20 11:06:16.810364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.575 [2024-11-20 11:06:16.810374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.575 [2024-11-20 11:06:16.810412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:29:27.575 [2024-11-20 11:06:16.810423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:27.575 [2024-11-20 11:06:16.810433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.575 [2024-11-20 11:06:16.810443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.575 [2024-11-20 11:06:16.810568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.575 [2024-11-20 11:06:16.810582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:27.575 [2024-11-20 11:06:16.810615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.575 [2024-11-20 11:06:16.810627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.575 [2024-11-20 11:06:16.810665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.575 [2024-11-20 11:06:16.810678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:27.575 [2024-11-20 11:06:16.810688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.575 [2024-11-20 11:06:16.810698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.575 [2024-11-20 11:06:16.810739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.575 [2024-11-20 11:06:16.810750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:27.575 [2024-11-20 11:06:16.810761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.575 [2024-11-20 11:06:16.810770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.575 [2024-11-20 11:06:16.810811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.575 [2024-11-20 11:06:16.810822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:27.575 [2024-11-20 11:06:16.810833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.575 [2024-11-20 11:06:16.810843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.575 [2024-11-20 11:06:16.810957] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 493.837 ms, result 0 00:29:28.948 00:29:28.948 00:29:28.948 11:06:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:30.321 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:30.321 11:06:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:30.321 11:06:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:30.321 11:06:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:30.321 11:06:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:30.321 11:06:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:30.579 11:06:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:30.579 11:06:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:30.579 Process with pid 80900 is not found 00:29:30.579 11:06:19 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@37 -- # killprocess 80900 00:29:30.579 11:06:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80900 ']' 00:29:30.579 11:06:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80900 00:29:30.579 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80900) - No such process 00:29:30.579 11:06:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80900 is not found' 00:29:30.579 11:06:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:30.837 Remove shared memory files 00:29:30.837 11:06:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:30.837 11:06:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:30.837 11:06:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:30.837 11:06:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:30.837 11:06:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:30.837 11:06:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:30.837 11:06:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:30.837 ************************************ 00:29:30.837 END TEST ftl_dirty_shutdown 00:29:30.837 ************************************ 00:29:30.837 00:29:30.837 real 3m33.686s 00:29:30.837 user 4m2.594s 00:29:30.837 sys 0m36.798s 00:29:30.837 11:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:30.837 11:06:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:30.837 11:06:20 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:30.837 11:06:20 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:30.837 11:06:20 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:30.837 11:06:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:30.837 ************************************ 00:29:30.837 START TEST ftl_upgrade_shutdown 00:29:30.837 ************************************ 00:29:30.837 11:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:31.148 * Looking for test storage... 
00:29:31.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:31.148 11:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:31.148 11:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:31.148 11:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:31.148 11:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:31.148 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:31.148 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:31.148 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:31.148 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:31.148 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:31.148 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:31.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.149 --rc genhtml_branch_coverage=1 00:29:31.149 --rc genhtml_function_coverage=1 00:29:31.149 --rc genhtml_legend=1 00:29:31.149 --rc geninfo_all_blocks=1 00:29:31.149 --rc geninfo_unexecuted_blocks=1 00:29:31.149 00:29:31.149 ' 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:31.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.149 --rc genhtml_branch_coverage=1 00:29:31.149 --rc genhtml_function_coverage=1 00:29:31.149 --rc genhtml_legend=1 00:29:31.149 --rc geninfo_all_blocks=1 00:29:31.149 --rc geninfo_unexecuted_blocks=1 00:29:31.149 00:29:31.149 ' 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:31.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.149 --rc genhtml_branch_coverage=1 00:29:31.149 --rc genhtml_function_coverage=1 00:29:31.149 --rc genhtml_legend=1 00:29:31.149 --rc geninfo_all_blocks=1 00:29:31.149 --rc geninfo_unexecuted_blocks=1 00:29:31.149 00:29:31.149 ' 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:31.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.149 --rc genhtml_branch_coverage=1 00:29:31.149 --rc genhtml_function_coverage=1 00:29:31.149 --rc genhtml_legend=1 00:29:31.149 --rc geninfo_all_blocks=1 00:29:31.149 --rc geninfo_unexecuted_blocks=1 00:29:31.149 00:29:31.149 ' 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:31.149 11:06:20 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83184 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83184 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83184 ']' 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:31.149 11:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:31.433 [2024-11-20 11:06:20.466665] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:29:31.433 [2024-11-20 11:06:20.466981] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83184 ] 00:29:31.433 [2024-11-20 11:06:20.651866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.691 [2024-11-20 11:06:20.758560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:32.625 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:32.883 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:32.883 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:32.883 11:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:32.883 11:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:32.883 11:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:32.883 11:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:32.883 11:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:32.883 11:06:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:32.883 11:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:32.883 { 00:29:32.883 "name": "basen1", 00:29:32.883 "aliases": [ 00:29:32.883 "e956dd47-1a19-4278-b241-c3eea10bb1d9" 00:29:32.883 ], 00:29:32.883 "product_name": "NVMe disk", 00:29:32.883 "block_size": 4096, 00:29:32.883 "num_blocks": 1310720, 00:29:32.883 "uuid": "e956dd47-1a19-4278-b241-c3eea10bb1d9", 00:29:32.883 "numa_id": -1, 00:29:32.883 "assigned_rate_limits": { 00:29:32.883 "rw_ios_per_sec": 0, 00:29:32.883 "rw_mbytes_per_sec": 0, 00:29:32.883 "r_mbytes_per_sec": 0, 00:29:32.883 "w_mbytes_per_sec": 0 00:29:32.883 }, 00:29:32.883 "claimed": true, 00:29:32.883 "claim_type": "read_many_write_one", 00:29:32.883 "zoned": false, 00:29:32.883 "supported_io_types": { 00:29:32.883 "read": true, 00:29:32.883 "write": true, 00:29:32.883 "unmap": true, 00:29:32.883 "flush": true, 00:29:32.883 "reset": true, 00:29:32.883 "nvme_admin": true, 00:29:32.883 "nvme_io": true, 00:29:32.883 "nvme_io_md": false, 00:29:32.883 "write_zeroes": true, 00:29:32.883 "zcopy": false, 00:29:32.883 "get_zone_info": false, 00:29:32.883 "zone_management": false, 00:29:32.883 "zone_append": false, 00:29:32.883 "compare": true, 00:29:32.883 "compare_and_write": false, 00:29:32.883 "abort": true, 00:29:32.883 "seek_hole": false, 00:29:32.883 "seek_data": false, 00:29:32.883 "copy": true, 00:29:32.883 "nvme_iov_md": false 00:29:32.883 }, 00:29:32.883 "driver_specific": { 00:29:32.883 "nvme": [ 00:29:32.883 { 00:29:32.883 "pci_address": "0000:00:11.0", 00:29:32.883 "trid": { 00:29:32.883 "trtype": "PCIe", 00:29:32.883 "traddr": "0000:00:11.0" 00:29:32.883 }, 00:29:32.883 "ctrlr_data": { 00:29:32.883 "cntlid": 0, 00:29:32.883 "vendor_id": "0x1b36", 00:29:32.883 "model_number": "QEMU NVMe Ctrl", 00:29:32.883 "serial_number": "12341", 00:29:32.883 "firmware_revision": "8.0.0", 00:29:32.884 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:32.884 "oacs": { 00:29:32.884 "security": 0, 00:29:32.884 "format": 1, 00:29:32.884 "firmware": 0, 00:29:32.884 "ns_manage": 1 00:29:32.884 }, 00:29:32.884 "multi_ctrlr": false, 00:29:32.884 "ana_reporting": false 00:29:32.884 }, 00:29:32.884 "vs": { 00:29:32.884 "nvme_version": "1.4" 00:29:32.884 }, 00:29:32.884 "ns_data": { 00:29:32.884 "id": 1, 00:29:32.884 "can_share": false 00:29:32.884 } 00:29:32.884 } 00:29:32.884 ], 00:29:32.884 "mp_policy": "active_passive" 00:29:32.884 } 00:29:32.884 } 00:29:32.884 ]' 00:29:32.884 11:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:33.141 11:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:33.141 11:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:33.141 11:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:33.141 11:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:33.141 11:06:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:33.141 11:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:33.141 11:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:33.141 11:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:33.141 11:06:22 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:33.141 11:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:33.141 11:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=0d3d0a42-a2eb-4233-89cc-d67fe8cfaf8b 00:29:33.141 11:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:33.141 11:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0d3d0a42-a2eb-4233-89cc-d67fe8cfaf8b 00:29:33.399 11:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:33.656 11:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=1f321274-087d-49f9-8633-9bbc4207f259 00:29:33.656 11:06:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 1f321274-087d-49f9-8633-9bbc4207f259 00:29:33.914 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=710f5e73-cbf8-48c5-898b-1eb2da8e471a 00:29:33.914 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 710f5e73-cbf8-48c5-898b-1eb2da8e471a ]] 00:29:33.914 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 710f5e73-cbf8-48c5-898b-1eb2da8e471a 5120 00:29:33.914 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:33.914 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:33.914 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=710f5e73-cbf8-48c5-898b-1eb2da8e471a 00:29:33.914 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:33.914 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 710f5e73-cbf8-48c5-898b-1eb2da8e471a 00:29:33.914 11:06:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=710f5e73-cbf8-48c5-898b-1eb2da8e471a 00:29:33.914 11:06:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:33.914 11:06:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:33.914 11:06:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:33.914 11:06:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 710f5e73-cbf8-48c5-898b-1eb2da8e471a 00:29:34.172 11:06:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:34.172 { 00:29:34.172 "name": "710f5e73-cbf8-48c5-898b-1eb2da8e471a", 00:29:34.172 "aliases": [ 00:29:34.172 "lvs/basen1p0" 00:29:34.172 ], 00:29:34.172 "product_name": "Logical Volume", 00:29:34.172 "block_size": 4096, 00:29:34.172 "num_blocks": 5242880, 00:29:34.172 "uuid": "710f5e73-cbf8-48c5-898b-1eb2da8e471a", 00:29:34.172 "assigned_rate_limits": { 00:29:34.172 "rw_ios_per_sec": 0, 00:29:34.172 "rw_mbytes_per_sec": 0, 00:29:34.172 "r_mbytes_per_sec": 0, 00:29:34.172 "w_mbytes_per_sec": 0 00:29:34.172 }, 00:29:34.172 "claimed": false, 00:29:34.172 "zoned": false, 00:29:34.172 "supported_io_types": { 00:29:34.172 "read": true, 00:29:34.172 "write": true, 00:29:34.172 "unmap": true, 00:29:34.172 "flush": false, 00:29:34.172 "reset": true, 00:29:34.172 "nvme_admin": false, 00:29:34.172 "nvme_io": false, 00:29:34.172 "nvme_io_md": false, 00:29:34.172 "write_zeroes": 
true, 00:29:34.172 "zcopy": false, 00:29:34.172 "get_zone_info": false, 00:29:34.172 "zone_management": false, 00:29:34.172 "zone_append": false, 00:29:34.172 "compare": false, 00:29:34.172 "compare_and_write": false, 00:29:34.172 "abort": false, 00:29:34.172 "seek_hole": true, 00:29:34.172 "seek_data": true, 00:29:34.172 "copy": false, 00:29:34.172 "nvme_iov_md": false 00:29:34.172 }, 00:29:34.172 "driver_specific": { 00:29:34.172 "lvol": { 00:29:34.172 "lvol_store_uuid": "1f321274-087d-49f9-8633-9bbc4207f259", 00:29:34.172 "base_bdev": "basen1", 00:29:34.172 "thin_provision": true, 00:29:34.172 "num_allocated_clusters": 0, 00:29:34.172 "snapshot": false, 00:29:34.172 "clone": false, 00:29:34.172 "esnap_clone": false 00:29:34.172 } 00:29:34.172 } 00:29:34.172 } 00:29:34.172 ]' 00:29:34.172 11:06:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:34.172 11:06:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:34.172 11:06:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:34.172 11:06:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:34.172 11:06:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:34.172 11:06:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:34.172 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:34.172 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:34.172 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:34.430 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:34.430 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:34.430 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:34.688 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:34.689 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:34.689 11:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 710f5e73-cbf8-48c5-898b-1eb2da8e471a -c cachen1p0 --l2p_dram_limit 2 00:29:34.689 [2024-11-20 11:06:23.920676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.689 [2024-11-20 11:06:23.920724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:34.689 [2024-11-20 11:06:23.920743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:34.689 [2024-11-20 11:06:23.920753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.689 [2024-11-20 11:06:23.920812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.689 [2024-11-20 11:06:23.920824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:34.689 [2024-11-20 11:06:23.920836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:29:34.689 [2024-11-20 11:06:23.920845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.689 [2024-11-20 11:06:23.920867] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:34.689 [2024-11-20 
11:06:23.921822] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:34.689 [2024-11-20 11:06:23.921852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.689 [2024-11-20 11:06:23.921863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:34.689 [2024-11-20 11:06:23.921877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.987 ms 00:29:34.689 [2024-11-20 11:06:23.921886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.689 [2024-11-20 11:06:23.921930] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID b3a5580e-a5cf-4819-83b7-f8caa8f926c3 00:29:34.689 [2024-11-20 11:06:23.923400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.689 [2024-11-20 11:06:23.923563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:34.689 [2024-11-20 11:06:23.923584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:34.689 [2024-11-20 11:06:23.923608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.689 [2024-11-20 11:06:23.931033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.689 [2024-11-20 11:06:23.931066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:34.689 [2024-11-20 11:06:23.931140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.342 ms 00:29:34.689 [2024-11-20 11:06:23.931152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.689 [2024-11-20 11:06:23.931196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.689 [2024-11-20 11:06:23.931211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:34.689 [2024-11-20 11:06:23.931222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:34.689 [2024-11-20 11:06:23.931238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.689 [2024-11-20 11:06:23.931295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.689 [2024-11-20 11:06:23.931310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:34.689 [2024-11-20 11:06:23.931320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:29:34.689 [2024-11-20 11:06:23.931338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.689 [2024-11-20 11:06:23.931363] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:34.689 [2024-11-20 11:06:23.935875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.689 [2024-11-20 11:06:23.935912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:34.689 [2024-11-20 11:06:23.935928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.524 ms 00:29:34.689 [2024-11-20 11:06:23.935937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.689 [2024-11-20 11:06:23.935967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.689 [2024-11-20 11:06:23.935977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:34.689 [2024-11-20 11:06:23.935989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:34.689 [2024-11-20 11:06:23.935999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:34.689 [2024-11-20 11:06:23.936041] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:34.689 [2024-11-20 11:06:23.936157] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:34.689 [2024-11-20 11:06:23.936176] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:34.689 [2024-11-20 11:06:23.936189] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:34.689 [2024-11-20 11:06:23.936204] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:34.689 [2024-11-20 11:06:23.936215] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:34.689 [2024-11-20 11:06:23.936228] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:34.689 [2024-11-20 11:06:23.936238] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:34.689 [2024-11-20 11:06:23.936252] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:34.689 [2024-11-20 11:06:23.936262] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:34.689 [2024-11-20 11:06:23.936274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.689 [2024-11-20 11:06:23.936283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:34.689 [2024-11-20 11:06:23.936295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.235 ms 00:29:34.689 [2024-11-20 11:06:23.936305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.689 [2024-11-20 11:06:23.936374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.689 [2024-11-20 11:06:23.936385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:34.689 [2024-11-20 11:06:23.936398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:29:34.689 [2024-11-20 11:06:23.936416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.689 [2024-11-20 11:06:23.936506] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:34.689 [2024-11-20 11:06:23.936518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:34.689 [2024-11-20 11:06:23.936530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:34.689 [2024-11-20 11:06:23.936540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.689 [2024-11-20 11:06:23.936552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:34.689 [2024-11-20 11:06:23.936561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:34.689 [2024-11-20 11:06:23.936573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:34.689 [2024-11-20 11:06:23.936581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:34.689 [2024-11-20 11:06:23.936609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:34.689 [2024-11-20 11:06:23.936619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.689 [2024-11-20 11:06:23.936647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:34.689 [2024-11-20 11:06:23.936657] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:34.689 [2024-11-20 11:06:23.936668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.689 [2024-11-20 11:06:23.936678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:34.689 [2024-11-20 11:06:23.936690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:34.689 [2024-11-20 11:06:23.936699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.689 [2024-11-20 11:06:23.936713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:34.689 [2024-11-20 11:06:23.936723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:34.689 [2024-11-20 11:06:23.936736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.689 [2024-11-20 11:06:23.936762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:34.689 [2024-11-20 11:06:23.936774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:34.689 [2024-11-20 11:06:23.936783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:34.689 [2024-11-20 11:06:23.936795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:34.689 [2024-11-20 11:06:23.936805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:34.689 [2024-11-20 11:06:23.936816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:34.689 [2024-11-20 11:06:23.936826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:34.689 [2024-11-20 11:06:23.936837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:34.689 [2024-11-20 11:06:23.936846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:34.689 [2024-11-20 11:06:23.936857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:34.689 [2024-11-20 11:06:23.936866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:34.689 [2024-11-20 11:06:23.936877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:34.689 [2024-11-20 11:06:23.936886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:34.689 [2024-11-20 11:06:23.936900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:34.689 [2024-11-20 11:06:23.936909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.689 [2024-11-20 11:06:23.936920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:34.689 [2024-11-20 11:06:23.936929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:34.689 [2024-11-20 11:06:23.936940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.689 [2024-11-20 11:06:23.936949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:34.689 [2024-11-20 11:06:23.936960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:34.690 [2024-11-20 11:06:23.936968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.690 [2024-11-20 11:06:23.936980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:34.690 [2024-11-20 11:06:23.936989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:34.690 [2024-11-20 11:06:23.937000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.690 [2024-11-20 11:06:23.937008] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:34.690 [2024-11-20 11:06:23.937021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:34.690 [2024-11-20 11:06:23.937032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:34.690 [2024-11-20 11:06:23.937045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.690 [2024-11-20 11:06:23.937055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:34.690 [2024-11-20 11:06:23.937069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:34.690 [2024-11-20 11:06:23.937078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:34.690 [2024-11-20 11:06:23.937090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:34.690 [2024-11-20 11:06:23.937098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:34.690 [2024-11-20 11:06:23.937110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:34.690 [2024-11-20 11:06:23.937124] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:34.690 [2024-11-20 11:06:23.937139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:34.690 [2024-11-20 11:06:23.937152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:34.690 [2024-11-20 11:06:23.937166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:34.690 [2024-11-20 11:06:23.937178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:34.690 [2024-11-20 11:06:23.937191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:34.690 [2024-11-20 11:06:23.937200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:34.690 [2024-11-20 11:06:23.937213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:34.690 [2024-11-20 11:06:23.937223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:34.690 [2024-11-20 11:06:23.937236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:34.690 [2024-11-20 11:06:23.937247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:34.690 [2024-11-20 11:06:23.937262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:34.690 [2024-11-20 11:06:23.937273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:34.690 [2024-11-20 11:06:23.937285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:34.690 [2024-11-20 11:06:23.937295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:34.690 [2024-11-20 11:06:23.937310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:34.690 [2024-11-20 11:06:23.937320] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:34.690 [2024-11-20 11:06:23.937333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:34.690 [2024-11-20 11:06:23.937344] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:34.690 [2024-11-20 11:06:23.937357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:34.690 [2024-11-20 11:06:23.937367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:34.690 [2024-11-20 11:06:23.937379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:34.690 [2024-11-20 11:06:23.937389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.690 [2024-11-20 11:06:23.937402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:34.690 [2024-11-20 11:06:23.937413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.938 ms 00:29:34.690 [2024-11-20 11:06:23.937425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.690 [2024-11-20 11:06:23.937464] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
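For reference, the ftl/common.sh trace above condenses to a short rpc.py sequence: clear any stale lvstore on the base namespace, create a fresh lvstore plus a thin-provisioned 20480 MiB lvol (the get_bdev_size check is just 4096 B block_size x 5242880 num_blocks / 1024^2 = 20480 MiB), attach the cache controller, split off a 5120 MiB write-buffer partition, and bind both into the FTL bdev. A sketch reconstructed from the traced commands, assuming basen1 is already attached as earlier in this run; the UUIDs are per-run values:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Clear any lvstore left on the base namespace by a previous run
    for u in $("$RPC" bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        "$RPC" bdev_lvol_delete_lvstore -u "$u"
    done

    # Base device: thin-provisioned 20480 MiB lvol on basen1
    lvs=$("$RPC" bdev_lvol_create_lvstore basen1 lvs)
    base=$("$RPC" bdev_lvol_create basen1p0 20480 -t -u "$lvs")   # -t: thin provisioning

    # NV cache: attach the PCIe controller, split off a 5120 MiB write buffer
    "$RPC" bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    "$RPC" bdev_split_create cachen1 -s 5120 1                    # -> cachen1p0

    # Bind base + cache into the FTL bdev (60 s RPC timeout; startup took ~4.1 s in this run)
    "$RPC" -t 60 bdev_ftl_create -b ftl -d "$base" -c cachen1p0 --l2p_dram_limit 2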
00:29:34.690 [2024-11-20 11:06:23.937482] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:38.871 [2024-11-20 11:06:27.590543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.871 [2024-11-20 11:06:27.590619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:38.871 [2024-11-20 11:06:27.590652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3659.007 ms 00:29:38.871 [2024-11-20 11:06:27.590666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.871 [2024-11-20 11:06:27.627417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.871 [2024-11-20 11:06:27.627700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:38.871 [2024-11-20 11:06:27.627727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.513 ms 00:29:38.871 [2024-11-20 11:06:27.627741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.871 [2024-11-20 11:06:27.627828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.871 [2024-11-20 11:06:27.627844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:38.871 [2024-11-20 11:06:27.627855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:38.871 [2024-11-20 11:06:27.627871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.871 [2024-11-20 11:06:27.670526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.871 [2024-11-20 11:06:27.670569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:38.871 [2024-11-20 11:06:27.670583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.680 ms 00:29:38.871 [2024-11-20 11:06:27.670628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.871 [2024-11-20 11:06:27.670663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.872 [2024-11-20 11:06:27.670680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:38.872 [2024-11-20 11:06:27.670691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:38.872 [2024-11-20 11:06:27.670703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.872 [2024-11-20 11:06:27.671207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.872 [2024-11-20 11:06:27.671229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:38.872 [2024-11-20 11:06:27.671241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.425 ms 00:29:38.872 [2024-11-20 11:06:27.671254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.872 [2024-11-20 11:06:27.671302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.872 [2024-11-20 11:06:27.671316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:38.872 [2024-11-20 11:06:27.671329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:38.872 [2024-11-20 11:06:27.671345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.872 [2024-11-20 11:06:27.691009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.872 [2024-11-20 11:06:27.691049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:38.872 [2024-11-20 11:06:27.691062] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.676 ms 00:29:38.872 [2024-11-20 11:06:27.691090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.872 [2024-11-20 11:06:27.703091] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:38.872 [2024-11-20 11:06:27.704184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.872 [2024-11-20 11:06:27.704212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:38.872 [2024-11-20 11:06:27.704227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.035 ms 00:29:38.872 [2024-11-20 11:06:27.704238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.872 [2024-11-20 11:06:27.749996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.872 [2024-11-20 11:06:27.750037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:38.872 [2024-11-20 11:06:27.750055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.798 ms 00:29:38.872 [2024-11-20 11:06:27.750065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.872 [2024-11-20 11:06:27.750154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.872 [2024-11-20 11:06:27.750169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:38.872 [2024-11-20 11:06:27.750184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:29:38.872 [2024-11-20 11:06:27.750194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.872 [2024-11-20 11:06:27.784309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.872 [2024-11-20 11:06:27.784346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:38.872 [2024-11-20 11:06:27.784363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.115 ms 00:29:38.872 [2024-11-20 11:06:27.784374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.872 [2024-11-20 11:06:27.818007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.872 [2024-11-20 11:06:27.818054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:38.872 [2024-11-20 11:06:27.818071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.637 ms 00:29:38.872 [2024-11-20 11:06:27.818081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.872 [2024-11-20 11:06:27.818836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.872 [2024-11-20 11:06:27.818857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:38.872 [2024-11-20 11:06:27.818871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.713 ms 00:29:38.872 [2024-11-20 11:06:27.818881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.872 [2024-11-20 11:06:27.917094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.872 [2024-11-20 11:06:27.917308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:38.872 [2024-11-20 11:06:27.917339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 98.295 ms 00:29:38.872 [2024-11-20 11:06:27.917351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.872 [2024-11-20 11:06:27.954851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:38.872 [2024-11-20 11:06:27.954894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:38.872 [2024-11-20 11:06:27.954937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.452 ms 00:29:38.872 [2024-11-20 11:06:27.954948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.872 [2024-11-20 11:06:27.990309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.872 [2024-11-20 11:06:27.990347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:38.872 [2024-11-20 11:06:27.990362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.370 ms 00:29:38.872 [2024-11-20 11:06:27.990388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.872 [2024-11-20 11:06:28.025989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.872 [2024-11-20 11:06:28.026036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:38.872 [2024-11-20 11:06:28.026056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.607 ms 00:29:38.872 [2024-11-20 11:06:28.026083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.872 [2024-11-20 11:06:28.026149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.872 [2024-11-20 11:06:28.026163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:38.872 [2024-11-20 11:06:28.026182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:38.872 [2024-11-20 11:06:28.026192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.872 [2024-11-20 11:06:28.026296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.872 [2024-11-20 11:06:28.026308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:38.872 [2024-11-20 11:06:28.026324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:29:38.872 [2024-11-20 11:06:28.026334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.872 [2024-11-20 11:06:28.027441] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4112.987 ms, result 0 00:29:38.872 { 00:29:38.872 "name": "ftl", 00:29:38.872 "uuid": "b3a5580e-a5cf-4819-83b7-f8caa8f926c3" 00:29:38.872 } 00:29:38.872 11:06:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:39.130 [2024-11-20 11:06:28.250173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.130 11:06:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:39.388 11:06:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:39.646 [2024-11-20 11:06:28.653898] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:39.646 11:06:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:39.646 [2024-11-20 11:06:28.855182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:39.646 11:06:28 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:40.211 Fill FTL, iteration 1 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83311 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83311 /var/tmp/spdk.tgt.sock 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83311 ']' 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:40.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.211 11:06:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:40.211 [2024-11-20 11:06:29.332976] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
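The export step just traced (ftl/common.sh@121-126) publishes the FTL bdev over NVMe/TCP so the data path can run from a separate initiator process. Condensed, with the flags as captured:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$RPC" nvmf_create_transport --trtype TCP
    "$RPC" nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1   # -a: allow any host, -m: max namespaces
    "$RPC" nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    "$RPC" nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    "$RPC" save_config                                                # snapshot of the target's config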
00:29:40.211 [2024-11-20 11:06:29.333375] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83311 ] 00:29:40.469 [2024-11-20 11:06:29.515493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.469 [2024-11-20 11:06:29.624291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.402 11:06:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.402 11:06:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:41.402 11:06:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:41.659 ftln1 00:29:41.659 11:06:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:41.659 11:06:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:41.916 11:06:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:41.916 11:06:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83311 00:29:41.916 11:06:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83311 ']' 00:29:41.916 11:06:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83311 00:29:41.916 11:06:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:41.916 11:06:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:41.916 11:06:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83311 00:29:41.916 killing process with pid 83311 00:29:41.916 11:06:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:41.916 11:06:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:41.917 11:06:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83311' 00:29:41.917 11:06:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83311 00:29:41.917 11:06:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83311 00:29:44.444 11:06:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:44.444 11:06:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:44.444 [2024-11-20 11:06:33.347429] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
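tcp_dd, invoked above, is the initiator half of that split: a throwaway spdk_tgt on core 1 first attaches the exported namespace as ftln1 and dumps its bdev subsystem config to ini.json (the redirect target is inferred from the later --json flag and the -f check in the trace), after which each spdk_dd run replays that file and drives I/O through ftln1. Roughly:

    rpc_ini() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock "$@"; }
    BIN=/home/vagrant/spdk_repo/spdk/build/bin
    CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

    # One-time setup: boot an initiator app, attach the exported namespace, save its bdev config
    "$BIN/spdk_tgt" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    # ... waitforlisten on /var/tmp/spdk.tgt.sock ...
    rpc_ini bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2018-09.io.spdk:cnode0                                 # prints: ftln1
    { echo '{"subsystems": ['; rpc_ini save_subsystem_config -n bdev; echo ']}'; } > "$CFG"
    # ... kill the setup app; later spdk_dd runs just replay $CFG ...

    # Data path: spdk_dd loads the same bdev config and writes through ftln1
    "$BIN/spdk_dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$CFG" \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0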
00:29:44.444 [2024-11-20 11:06:33.347563] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83366 ] 00:29:44.444 [2024-11-20 11:06:33.533693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.444 [2024-11-20 11:06:33.650688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.352  [2024-11-20T11:06:36.170Z] Copying: 241/1024 [MB] (241 MBps) [2024-11-20T11:06:37.543Z] Copying: 484/1024 [MB] (243 MBps) [2024-11-20T11:06:38.478Z] Copying: 728/1024 [MB] (244 MBps) [2024-11-20T11:06:38.478Z] Copying: 970/1024 [MB] (242 MBps) [2024-11-20T11:06:39.853Z] Copying: 1024/1024 [MB] (average 242 MBps) 00:29:50.600 00:29:50.600 Calculate MD5 checksum, iteration 1 00:29:50.600 11:06:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:50.600 11:06:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:50.600 11:06:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:50.600 11:06:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:50.600 11:06:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:50.600 11:06:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:50.600 11:06:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:50.600 11:06:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:50.600 [2024-11-20 11:06:39.544688] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:29:50.600 [2024-11-20 11:06:39.544997] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83430 ] 00:29:50.600 [2024-11-20 11:06:39.722210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.600 [2024-11-20 11:06:39.835445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.502  [2024-11-20T11:06:41.755Z] Copying: 721/1024 [MB] (721 MBps) [2024-11-20T11:06:42.690Z] Copying: 1024/1024 [MB] (average 717 MBps) 00:29:53.437 00:29:53.437 11:06:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:53.437 11:06:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:55.337 11:06:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:55.337 Fill FTL, iteration 2 00:29:55.337 11:06:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=7ac4f8eab97cfa05e9f589913700d385 00:29:55.337 11:06:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:55.337 11:06:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:55.337 11:06:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:55.337 11:06:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:55.337 11:06:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:55.337 11:06:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:55.337 11:06:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:55.337 11:06:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:55.337 11:06:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:55.337 [2024-11-20 11:06:44.351838] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
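The seek/skip bookkeeping in upgrade_shutdown.sh@28-48 amounts to a two-pass fill-and-fingerprint loop: write 1024 x 1 MiB of urandom at qd=2, read the same window back into test/ftl/file, and stash its md5 so the data can be verified after the shutdown/upgrade cycle; both offsets then advance by 1024 MiB for the next pass. Reconstructed from the traced variables (tcp_dd as sketched earlier):

    FILE=/home/vagrant/spdk_repo/spdk/test/ftl/file
    bs=1048576 count=1024 qd=2 iterations=2
    seek=0 skip=0 sums=()

    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $(( i + 1 ))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$(( seek + count ))

        echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
        tcp_dd --ib=ftln1 --of="$FILE" --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$(( skip + count ))

        sums[i]=$(md5sum "$FILE" | cut -f1 -d' ')   # pass 1 here: 7ac4f8ea..., pass 2: 1c4cbf70...
    done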
00:29:55.337 [2024-11-20 11:06:44.352644] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83486 ] 00:29:55.337 [2024-11-20 11:06:44.551268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.596 [2024-11-20 11:06:44.660496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.969  [2024-11-20T11:06:47.155Z] Copying: 244/1024 [MB] (244 MBps) [2024-11-20T11:06:48.529Z] Copying: 486/1024 [MB] (242 MBps) [2024-11-20T11:06:49.462Z] Copying: 723/1024 [MB] (237 MBps) [2024-11-20T11:06:49.462Z] Copying: 960/1024 [MB] (237 MBps) [2024-11-20T11:06:50.871Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:30:01.618 00:30:01.618 Calculate MD5 checksum, iteration 2 00:30:01.618 11:06:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:01.618 11:06:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:01.618 11:06:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:01.618 11:06:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:01.618 11:06:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:01.618 11:06:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:01.618 11:06:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:01.618 11:06:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:01.618 [2024-11-20 11:06:50.570767] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:30:01.618 [2024-11-20 11:06:50.571085] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83550 ] 00:30:01.618 [2024-11-20 11:06:50.751520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.618 [2024-11-20 11:06:50.860595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.512  [2024-11-20T11:06:53.022Z] Copying: 727/1024 [MB] (727 MBps) [2024-11-20T11:06:54.393Z] Copying: 1024/1024 [MB] (average 715 MBps) 00:30:05.140 00:30:05.140 11:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:05.140 11:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:06.515 11:06:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:06.515 11:06:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=1c4cbf70501ed323917280296448796d 00:30:06.515 11:06:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:06.515 11:06:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:06.515 11:06:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:06.773 [2024-11-20 11:06:55.931205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.773 [2024-11-20 11:06:55.931255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:06.773 [2024-11-20 11:06:55.931271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:06.773 [2024-11-20 11:06:55.931297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.773 [2024-11-20 11:06:55.931335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.773 [2024-11-20 11:06:55.931347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:06.773 [2024-11-20 11:06:55.931357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:06.773 [2024-11-20 11:06:55.931372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.773 [2024-11-20 11:06:55.931392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.773 [2024-11-20 11:06:55.931404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:06.773 [2024-11-20 11:06:55.931414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:06.773 [2024-11-20 11:06:55.931424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.773 [2024-11-20 11:06:55.931484] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.281 ms, result 0 00:30:06.773 true 00:30:06.773 11:06:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:07.031 { 00:30:07.031 "name": "ftl", 00:30:07.031 "properties": [ 00:30:07.031 { 00:30:07.031 "name": "superblock_version", 00:30:07.031 "value": 5, 00:30:07.031 "read-only": true 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "name": "base_device", 00:30:07.031 "bands": [ 00:30:07.031 { 00:30:07.031 "id": 0, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 
00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 1, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 2, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 3, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 4, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 5, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 6, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 7, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 8, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 9, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 10, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 11, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 12, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 13, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 14, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 15, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 16, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 17, 00:30:07.031 "state": "FREE", 00:30:07.031 "validity": 0.0 00:30:07.031 } 00:30:07.031 ], 00:30:07.031 "read-only": true 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "name": "cache_device", 00:30:07.031 "type": "bdev", 00:30:07.031 "chunks": [ 00:30:07.031 { 00:30:07.031 "id": 0, 00:30:07.031 "state": "INACTIVE", 00:30:07.031 "utilization": 0.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 1, 00:30:07.031 "state": "CLOSED", 00:30:07.031 "utilization": 1.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 2, 00:30:07.031 "state": "CLOSED", 00:30:07.031 "utilization": 1.0 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 3, 00:30:07.031 "state": "OPEN", 00:30:07.031 "utilization": 0.001953125 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "id": 4, 00:30:07.031 "state": "OPEN", 00:30:07.031 "utilization": 0.0 00:30:07.031 } 00:30:07.031 ], 00:30:07.031 "read-only": true 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "name": "verbose_mode", 00:30:07.031 "value": true, 00:30:07.031 "unit": "", 00:30:07.031 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:07.031 }, 00:30:07.031 { 00:30:07.031 "name": "prep_upgrade_on_shutdown", 00:30:07.031 "value": false, 00:30:07.031 "unit": "", 00:30:07.031 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:07.031 } 00:30:07.031 ] 00:30:07.031 } 00:30:07.031 11:06:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:07.289 [2024-11-20 11:06:56.334877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
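The properties toggled in this stretch are the actual upgrade switch: verbose_mode exposes the band/chunk state dumped above, and prep_upgrade_on_shutdown makes FTL run its upgrade actions during teardown. The used-chunk check (used=3 below: two CLOSED chunks at utilization 1.0 plus one partially written OPEN chunk) confirms there is cached data to carry across the restart. Condensed from the calls in this part of the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$RPC" bdev_ftl_set_property -b ftl -p verbose_mode -v true
    "$RPC" bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true

    # Count cache chunks that still hold data
    "$RPC" bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device") | .chunks[]
             | select(.utilization != 0.0)] | length'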
00:30:07.289 [2024-11-20 11:06:56.335066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:07.289 [2024-11-20 11:06:56.335200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:07.289 [2024-11-20 11:06:56.335239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.289 [2024-11-20 11:06:56.335299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.289 [2024-11-20 11:06:56.335386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:07.289 [2024-11-20 11:06:56.335422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:07.289 [2024-11-20 11:06:56.335452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.289 [2024-11-20 11:06:56.335541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.289 [2024-11-20 11:06:56.335577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:07.289 [2024-11-20 11:06:56.335678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:07.289 [2024-11-20 11:06:56.335715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.289 [2024-11-20 11:06:56.335791] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.898 ms, result 0 00:30:07.289 true 00:30:07.289 11:06:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:07.289 11:06:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:07.289 11:06:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:07.547 11:06:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:07.547 11:06:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:07.547 11:06:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:07.547 [2024-11-20 11:06:56.748512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.547 [2024-11-20 11:06:56.748561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:07.547 [2024-11-20 11:06:56.748578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:07.547 [2024-11-20 11:06:56.748588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.547 [2024-11-20 11:06:56.748626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.547 [2024-11-20 11:06:56.748637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:07.547 [2024-11-20 11:06:56.748648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:07.547 [2024-11-20 11:06:56.748657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.547 [2024-11-20 11:06:56.748677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.547 [2024-11-20 11:06:56.748687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:07.547 [2024-11-20 11:06:56.748697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:07.547 [2024-11-20 11:06:56.748706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:07.547 [2024-11-20 11:06:56.748763] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.240 ms, result 0 00:30:07.547 true 00:30:07.547 11:06:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:07.805 { 00:30:07.805 "name": "ftl", 00:30:07.805 "properties": [ 00:30:07.805 { 00:30:07.805 "name": "superblock_version", 00:30:07.805 "value": 5, 00:30:07.805 "read-only": true 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "name": "base_device", 00:30:07.805 "bands": [ 00:30:07.805 { 00:30:07.805 "id": 0, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 1, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 2, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 3, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 4, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 5, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 6, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 7, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 8, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 9, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 10, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 11, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 12, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 13, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 14, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 15, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 16, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.805 }, 00:30:07.805 { 00:30:07.805 "id": 17, 00:30:07.805 "state": "FREE", 00:30:07.805 "validity": 0.0 00:30:07.806 } 00:30:07.806 ], 00:30:07.806 "read-only": true 00:30:07.806 }, 00:30:07.806 { 00:30:07.806 "name": "cache_device", 00:30:07.806 "type": "bdev", 00:30:07.806 "chunks": [ 00:30:07.806 { 00:30:07.806 "id": 0, 00:30:07.806 "state": "INACTIVE", 00:30:07.806 "utilization": 0.0 00:30:07.806 }, 00:30:07.806 { 00:30:07.806 "id": 1, 00:30:07.806 "state": "CLOSED", 00:30:07.806 "utilization": 1.0 00:30:07.806 }, 00:30:07.806 { 00:30:07.806 "id": 2, 00:30:07.806 "state": "CLOSED", 00:30:07.806 "utilization": 1.0 00:30:07.806 }, 00:30:07.806 { 00:30:07.806 "id": 3, 00:30:07.806 "state": "OPEN", 00:30:07.806 "utilization": 0.001953125 00:30:07.806 }, 00:30:07.806 { 00:30:07.806 "id": 4, 00:30:07.806 "state": "OPEN", 00:30:07.806 "utilization": 0.0 00:30:07.806 } 00:30:07.806 ], 00:30:07.806 "read-only": true 00:30:07.806 }, 00:30:07.806 { 00:30:07.806 "name": "verbose_mode", 
00:30:07.806 "value": true, 00:30:07.806 "unit": "", 00:30:07.806 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:07.806 }, 00:30:07.806 { 00:30:07.806 "name": "prep_upgrade_on_shutdown", 00:30:07.806 "value": true, 00:30:07.806 "unit": "", 00:30:07.806 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:07.806 } 00:30:07.806 ] 00:30:07.806 } 00:30:07.806 11:06:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:07.806 11:06:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83184 ]] 00:30:07.806 11:06:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83184 00:30:07.806 11:06:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83184 ']' 00:30:07.806 11:06:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83184 00:30:07.806 11:06:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:07.806 11:06:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:07.806 11:06:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83184 00:30:07.806 11:06:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:07.806 11:06:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:07.806 killing process with pid 83184 00:30:07.806 11:06:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83184' 00:30:07.806 11:06:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83184 00:30:07.806 11:06:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83184 00:30:09.179 [2024-11-20 11:06:58.085220] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:09.179 [2024-11-20 11:06:58.105012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.179 [2024-11-20 11:06:58.105052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:09.179 [2024-11-20 11:06:58.105066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:09.179 [2024-11-20 11:06:58.105076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.179 [2024-11-20 11:06:58.105097] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:09.179 [2024-11-20 11:06:58.109201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.179 [2024-11-20 11:06:58.109231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:09.179 [2024-11-20 11:06:58.109243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.095 ms 00:30:09.179 [2024-11-20 11:06:58.109252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.317064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.304 [2024-11-20 11:07:05.317115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:17.304 [2024-11-20 11:07:05.317131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7219.488 ms 00:30:17.304 [2024-11-20 11:07:05.317147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.318413] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:17.304 [2024-11-20 11:07:05.318444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:17.304 [2024-11-20 11:07:05.318456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.249 ms 00:30:17.304 [2024-11-20 11:07:05.318466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.319393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.304 [2024-11-20 11:07:05.319423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:17.304 [2024-11-20 11:07:05.319436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.912 ms 00:30:17.304 [2024-11-20 11:07:05.319446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.334339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.304 [2024-11-20 11:07:05.334377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:17.304 [2024-11-20 11:07:05.334390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.868 ms 00:30:17.304 [2024-11-20 11:07:05.334416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.343872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.304 [2024-11-20 11:07:05.343908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:17.304 [2024-11-20 11:07:05.343921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.449 ms 00:30:17.304 [2024-11-20 11:07:05.343947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.344026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.304 [2024-11-20 11:07:05.344039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:17.304 [2024-11-20 11:07:05.344055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:30:17.304 [2024-11-20 11:07:05.344065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.358311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.304 [2024-11-20 11:07:05.358484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:17.304 [2024-11-20 11:07:05.358515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.252 ms 00:30:17.304 [2024-11-20 11:07:05.358525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.372941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.304 [2024-11-20 11:07:05.373107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:17.304 [2024-11-20 11:07:05.373127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.414 ms 00:30:17.304 [2024-11-20 11:07:05.373137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.387551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.304 [2024-11-20 11:07:05.387606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:17.304 [2024-11-20 11:07:05.387618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.412 ms 00:30:17.304 [2024-11-20 11:07:05.387627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.401978] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.304 [2024-11-20 11:07:05.402011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:17.304 [2024-11-20 11:07:05.402024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.284 ms 00:30:17.304 [2024-11-20 11:07:05.402033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.402052] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:17.304 [2024-11-20 11:07:05.402067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:17.304 [2024-11-20 11:07:05.402080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:17.304 [2024-11-20 11:07:05.402103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:17.304 [2024-11-20 11:07:05.402114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:17.304 [2024-11-20 11:07:05.402125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:17.304 [2024-11-20 11:07:05.402135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:17.304 [2024-11-20 11:07:05.402146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:17.304 [2024-11-20 11:07:05.402157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:17.304 [2024-11-20 11:07:05.402167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:17.304 [2024-11-20 11:07:05.402178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:17.304 [2024-11-20 11:07:05.402188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:17.304 [2024-11-20 11:07:05.402199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:17.304 [2024-11-20 11:07:05.402209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:17.304 [2024-11-20 11:07:05.402219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:17.304 [2024-11-20 11:07:05.402229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:17.304 [2024-11-20 11:07:05.402239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:17.304 [2024-11-20 11:07:05.402250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:17.304 [2024-11-20 11:07:05.402260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:17.304 [2024-11-20 11:07:05.402272] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:17.304 [2024-11-20 11:07:05.402282] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b3a5580e-a5cf-4819-83b7-f8caa8f926c3 00:30:17.304 [2024-11-20 11:07:05.402293] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:17.304 [2024-11-20 11:07:05.402303] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:17.304 [2024-11-20 11:07:05.402313] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:17.304 [2024-11-20 11:07:05.402323] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:17.304 [2024-11-20 11:07:05.402333] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:17.304 [2024-11-20 11:07:05.402347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:17.304 [2024-11-20 11:07:05.402356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:17.304 [2024-11-20 11:07:05.402365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:17.304 [2024-11-20 11:07:05.402375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:17.304 [2024-11-20 11:07:05.402385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.304 [2024-11-20 11:07:05.402399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:17.304 [2024-11-20 11:07:05.402409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.334 ms 00:30:17.304 [2024-11-20 11:07:05.402419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.422792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.304 [2024-11-20 11:07:05.422823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:17.304 [2024-11-20 11:07:05.422835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.376 ms 00:30:17.304 [2024-11-20 11:07:05.422851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.423394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.304 [2024-11-20 11:07:05.423404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:17.304 [2024-11-20 11:07:05.423415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.521 ms 00:30:17.304 [2024-11-20 11:07:05.423425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.488365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.304 [2024-11-20 11:07:05.488402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:17.304 [2024-11-20 11:07:05.488421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.304 [2024-11-20 11:07:05.488431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.488465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.304 [2024-11-20 11:07:05.488475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:17.304 [2024-11-20 11:07:05.488486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.304 [2024-11-20 11:07:05.488495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.488568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.304 [2024-11-20 11:07:05.488581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:17.304 [2024-11-20 11:07:05.488616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.304 [2024-11-20 11:07:05.488627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.488666] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.304 [2024-11-20 11:07:05.488677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:17.304 [2024-11-20 11:07:05.488687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.304 [2024-11-20 11:07:05.488697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.611336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.304 [2024-11-20 11:07:05.611381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:17.304 [2024-11-20 11:07:05.611396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.304 [2024-11-20 11:07:05.611412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.705981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.304 [2024-11-20 11:07:05.706168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:17.304 [2024-11-20 11:07:05.706206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.304 [2024-11-20 11:07:05.706217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.706320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.304 [2024-11-20 11:07:05.706346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:17.304 [2024-11-20 11:07:05.706357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.304 [2024-11-20 11:07:05.706368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.706420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.304 [2024-11-20 11:07:05.706432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:17.304 [2024-11-20 11:07:05.706443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.304 [2024-11-20 11:07:05.706453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.706580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.304 [2024-11-20 11:07:05.706615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:17.304 [2024-11-20 11:07:05.706627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.304 [2024-11-20 11:07:05.706637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.706679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.304 [2024-11-20 11:07:05.706696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:17.304 [2024-11-20 11:07:05.706706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.304 [2024-11-20 11:07:05.706716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.706755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.304 [2024-11-20 11:07:05.706766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:17.304 [2024-11-20 11:07:05.706777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.304 [2024-11-20 11:07:05.706786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 
[2024-11-20 11:07:05.706836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.304 [2024-11-20 11:07:05.706849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:17.304 [2024-11-20 11:07:05.706859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.304 [2024-11-20 11:07:05.706870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.304 [2024-11-20 11:07:05.706984] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7614.288 ms, result 0 00:30:22.617 11:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:22.617 11:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:22.617 11:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:22.617 11:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:22.617 11:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:22.617 11:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:22.617 11:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83743 00:30:22.617 11:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:22.617 11:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83743 00:30:22.617 11:07:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83743 ']' 00:30:22.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.617 11:07:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.617 11:07:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:22.617 11:07:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.617 11:07:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:22.617 11:07:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:22.617 [2024-11-20 11:07:11.287910] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
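(Aside: the records above show tcp_target_setup relaunching spdk_tgt from the saved tgt.json and then blocking in waitforlisten until the RPC socket answers. A minimal sketch of that pattern, assuming a simple rpc_get_methods probe in place of the fuller waitforlisten() helper from autotest_common.sh:)

    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    # Relaunch the target on core 0 with the config saved before shutdown.
    "$SPDK_DIR/build/bin/spdk_tgt" '--cpumask=[0]' \
        --config="$SPDK_DIR/test/ftl/config/tgt.json" &
    spdk_tgt_pid=$!

    # Poll the default RPC socket until the target responds (assumed probe;
    # rpc_get_methods is a standard SPDK RPC, the real helper does more).
    for ((i = 0; i < 100; i++)); do
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            &> /dev/null && break
        sleep 0.1
    done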
00:30:22.617 [2024-11-20 11:07:11.288166] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83743 ] 00:30:22.617 [2024-11-20 11:07:11.468732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.617 [2024-11-20 11:07:11.571441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.556 [2024-11-20 11:07:12.503738] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:23.556 [2024-11-20 11:07:12.503800] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:23.556 [2024-11-20 11:07:12.650083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.556 [2024-11-20 11:07:12.650128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:23.556 [2024-11-20 11:07:12.650143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:23.556 [2024-11-20 11:07:12.650153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.556 [2024-11-20 11:07:12.650202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.556 [2024-11-20 11:07:12.650214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:23.556 [2024-11-20 11:07:12.650224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:23.556 [2024-11-20 11:07:12.650233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.556 [2024-11-20 11:07:12.650260] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:23.556 [2024-11-20 11:07:12.651265] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:23.556 [2024-11-20 11:07:12.651301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.556 [2024-11-20 11:07:12.651313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:23.556 [2024-11-20 11:07:12.651323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.053 ms 00:30:23.556 [2024-11-20 11:07:12.651333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.556 [2024-11-20 11:07:12.652759] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:23.556 [2024-11-20 11:07:12.670673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.556 [2024-11-20 11:07:12.670720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:23.556 [2024-11-20 11:07:12.670740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.944 ms 00:30:23.556 [2024-11-20 11:07:12.670751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.556 [2024-11-20 11:07:12.670810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.556 [2024-11-20 11:07:12.670822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:23.556 [2024-11-20 11:07:12.670832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:23.556 [2024-11-20 11:07:12.670852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.556 [2024-11-20 11:07:12.677495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.556 [2024-11-20 
11:07:12.677655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:23.556 [2024-11-20 11:07:12.677693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.579 ms 00:30:23.556 [2024-11-20 11:07:12.677703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.556 [2024-11-20 11:07:12.677770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.556 [2024-11-20 11:07:12.677784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:23.556 [2024-11-20 11:07:12.677794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:30:23.556 [2024-11-20 11:07:12.677804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.556 [2024-11-20 11:07:12.677849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.556 [2024-11-20 11:07:12.677860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:23.556 [2024-11-20 11:07:12.677874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:23.556 [2024-11-20 11:07:12.677885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.556 [2024-11-20 11:07:12.677909] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:23.556 [2024-11-20 11:07:12.682586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.556 [2024-11-20 11:07:12.682625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:23.556 [2024-11-20 11:07:12.682638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.689 ms 00:30:23.556 [2024-11-20 11:07:12.682652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.556 [2024-11-20 11:07:12.682678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.556 [2024-11-20 11:07:12.682688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:23.556 [2024-11-20 11:07:12.682698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:23.556 [2024-11-20 11:07:12.682707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.556 [2024-11-20 11:07:12.682762] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:23.556 [2024-11-20 11:07:12.682785] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:23.556 [2024-11-20 11:07:12.682822] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:23.556 [2024-11-20 11:07:12.682839] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:23.556 [2024-11-20 11:07:12.682925] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:23.556 [2024-11-20 11:07:12.682937] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:23.556 [2024-11-20 11:07:12.682950] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:23.557 [2024-11-20 11:07:12.682963] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:23.557 [2024-11-20 11:07:12.682974] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:23.557 [2024-11-20 11:07:12.682989] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:23.557 [2024-11-20 11:07:12.682999] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:23.557 [2024-11-20 11:07:12.683008] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:23.557 [2024-11-20 11:07:12.683018] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:23.557 [2024-11-20 11:07:12.683028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.557 [2024-11-20 11:07:12.683038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:23.557 [2024-11-20 11:07:12.683048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:30:23.557 [2024-11-20 11:07:12.683057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.557 [2024-11-20 11:07:12.683127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.557 [2024-11-20 11:07:12.683138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:23.557 [2024-11-20 11:07:12.683148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:23.557 [2024-11-20 11:07:12.683161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.557 [2024-11-20 11:07:12.683248] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:23.557 [2024-11-20 11:07:12.683260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:23.557 [2024-11-20 11:07:12.683270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:23.557 [2024-11-20 11:07:12.683279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.557 [2024-11-20 11:07:12.683289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:23.557 [2024-11-20 11:07:12.683298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:23.557 [2024-11-20 11:07:12.683308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:23.557 [2024-11-20 11:07:12.683317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:23.557 [2024-11-20 11:07:12.683328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:23.557 [2024-11-20 11:07:12.683336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.557 [2024-11-20 11:07:12.683345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:23.557 [2024-11-20 11:07:12.683354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:23.557 [2024-11-20 11:07:12.683363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.557 [2024-11-20 11:07:12.683372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:23.557 [2024-11-20 11:07:12.683382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:23.557 [2024-11-20 11:07:12.683390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.557 [2024-11-20 11:07:12.683400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:23.557 [2024-11-20 11:07:12.683409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:23.557 [2024-11-20 11:07:12.683417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.557 [2024-11-20 11:07:12.683426] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:23.557 [2024-11-20 11:07:12.683435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:23.557 [2024-11-20 11:07:12.683444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:23.557 [2024-11-20 11:07:12.683453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:23.557 [2024-11-20 11:07:12.683461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:23.557 [2024-11-20 11:07:12.683470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:23.557 [2024-11-20 11:07:12.683490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:23.557 [2024-11-20 11:07:12.683499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:23.557 [2024-11-20 11:07:12.683508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:23.557 [2024-11-20 11:07:12.683517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:23.557 [2024-11-20 11:07:12.683526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:23.557 [2024-11-20 11:07:12.683535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:23.557 [2024-11-20 11:07:12.683544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:23.557 [2024-11-20 11:07:12.683554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:23.557 [2024-11-20 11:07:12.683562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.557 [2024-11-20 11:07:12.683572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:23.557 [2024-11-20 11:07:12.683580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:23.557 [2024-11-20 11:07:12.683589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.557 [2024-11-20 11:07:12.683610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:23.557 [2024-11-20 11:07:12.683619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:23.557 [2024-11-20 11:07:12.683628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.557 [2024-11-20 11:07:12.683637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:23.557 [2024-11-20 11:07:12.683646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:23.557 [2024-11-20 11:07:12.683654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.557 [2024-11-20 11:07:12.683663] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:23.557 [2024-11-20 11:07:12.683673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:23.557 [2024-11-20 11:07:12.683683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:23.557 [2024-11-20 11:07:12.683693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.557 [2024-11-20 11:07:12.683706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:23.557 [2024-11-20 11:07:12.683717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:23.557 [2024-11-20 11:07:12.683726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:23.557 [2024-11-20 11:07:12.683746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:23.557 [2024-11-20 11:07:12.683755] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:23.557 [2024-11-20 11:07:12.683764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:23.557 [2024-11-20 11:07:12.683773] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:23.557 [2024-11-20 11:07:12.683785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:23.557 [2024-11-20 11:07:12.683796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:23.557 [2024-11-20 11:07:12.683806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:23.557 [2024-11-20 11:07:12.683815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:23.557 [2024-11-20 11:07:12.683825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:23.557 [2024-11-20 11:07:12.683834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:23.557 [2024-11-20 11:07:12.683843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:23.557 [2024-11-20 11:07:12.683853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:23.557 [2024-11-20 11:07:12.683862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:23.557 [2024-11-20 11:07:12.683872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:23.557 [2024-11-20 11:07:12.683881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:23.557 [2024-11-20 11:07:12.683891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:23.557 [2024-11-20 11:07:12.683900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:23.557 [2024-11-20 11:07:12.683909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:23.557 [2024-11-20 11:07:12.683918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:23.557 [2024-11-20 11:07:12.683927] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:23.557 [2024-11-20 11:07:12.683938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:23.557 [2024-11-20 11:07:12.683947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:23.557 [2024-11-20 11:07:12.683957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:23.557 [2024-11-20 11:07:12.683967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:23.558 [2024-11-20 11:07:12.683976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:23.558 [2024-11-20 11:07:12.683986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.558 [2024-11-20 11:07:12.683995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:23.558 [2024-11-20 11:07:12.684004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.792 ms 00:30:23.558 [2024-11-20 11:07:12.684014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.558 [2024-11-20 11:07:12.684054] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:23.558 [2024-11-20 11:07:12.684070] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:27.749 [2024-11-20 11:07:16.388945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.389155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:27.749 [2024-11-20 11:07:16.389274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3710.903 ms 00:30:27.749 [2024-11-20 11:07:16.389314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.749 [2024-11-20 11:07:16.426150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.426313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:27.749 [2024-11-20 11:07:16.426466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.569 ms 00:30:27.749 [2024-11-20 11:07:16.426512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.749 [2024-11-20 11:07:16.426638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.426747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:27.749 [2024-11-20 11:07:16.426786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:27.749 [2024-11-20 11:07:16.426816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.749 [2024-11-20 11:07:16.469540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.469696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:27.749 [2024-11-20 11:07:16.469783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.668 ms 00:30:27.749 [2024-11-20 11:07:16.469827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.749 [2024-11-20 11:07:16.469886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.469919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:27.749 [2024-11-20 11:07:16.469950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:27.749 [2024-11-20 11:07:16.469980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.749 [2024-11-20 11:07:16.470488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.470546] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:27.749 [2024-11-20 11:07:16.470579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.412 ms 00:30:27.749 [2024-11-20 11:07:16.470631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.749 [2024-11-20 11:07:16.470706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.470826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:27.749 [2024-11-20 11:07:16.470857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:27.749 [2024-11-20 11:07:16.470887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.749 [2024-11-20 11:07:16.491767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.491897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:27.749 [2024-11-20 11:07:16.492039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.810 ms 00:30:27.749 [2024-11-20 11:07:16.492077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.749 [2024-11-20 11:07:16.510316] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:27.749 [2024-11-20 11:07:16.510489] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:27.749 [2024-11-20 11:07:16.510648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.510684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:27.749 [2024-11-20 11:07:16.510717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.462 ms 00:30:27.749 [2024-11-20 11:07:16.510746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.749 [2024-11-20 11:07:16.529443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.529573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:27.749 [2024-11-20 11:07:16.529676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.665 ms 00:30:27.749 [2024-11-20 11:07:16.529714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.749 [2024-11-20 11:07:16.547274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.547402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:27.749 [2024-11-20 11:07:16.547488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.523 ms 00:30:27.749 [2024-11-20 11:07:16.547523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.749 [2024-11-20 11:07:16.564987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.565131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:27.749 [2024-11-20 11:07:16.565205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.436 ms 00:30:27.749 [2024-11-20 11:07:16.565239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.749 [2024-11-20 11:07:16.566105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.566229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:27.749 [2024-11-20 
11:07:16.566303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.689 ms 00:30:27.749 [2024-11-20 11:07:16.566401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.749 [2024-11-20 11:07:16.661057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.661287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:27.749 [2024-11-20 11:07:16.661327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 94.753 ms 00:30:27.749 [2024-11-20 11:07:16.661339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.749 [2024-11-20 11:07:16.671605] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:27.749 [2024-11-20 11:07:16.672296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.672326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:27.749 [2024-11-20 11:07:16.672338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.868 ms 00:30:27.749 [2024-11-20 11:07:16.672349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.749 [2024-11-20 11:07:16.672423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.672439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:27.749 [2024-11-20 11:07:16.672450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:27.749 [2024-11-20 11:07:16.672461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.749 [2024-11-20 11:07:16.672523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.749 [2024-11-20 11:07:16.672535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:27.749 [2024-11-20 11:07:16.672546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:27.750 [2024-11-20 11:07:16.672556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.750 [2024-11-20 11:07:16.672579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.750 [2024-11-20 11:07:16.672590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:27.750 [2024-11-20 11:07:16.672622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:27.750 [2024-11-20 11:07:16.672636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.750 [2024-11-20 11:07:16.672673] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:27.750 [2024-11-20 11:07:16.672686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.750 [2024-11-20 11:07:16.672697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:27.750 [2024-11-20 11:07:16.672707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:27.750 [2024-11-20 11:07:16.672716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.750 [2024-11-20 11:07:16.706816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.750 [2024-11-20 11:07:16.706854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:27.750 [2024-11-20 11:07:16.706867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.133 ms 00:30:27.750 [2024-11-20 11:07:16.706892] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.750 [2024-11-20 11:07:16.706967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.750 [2024-11-20 11:07:16.706980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:27.750 [2024-11-20 11:07:16.706991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:30:27.750 [2024-11-20 11:07:16.707000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.750 [2024-11-20 11:07:16.708067] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4064.125 ms, result 0 00:30:27.750 [2024-11-20 11:07:16.723141] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.750 [2024-11-20 11:07:16.739115] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:27.750 [2024-11-20 11:07:16.747718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:28.317 11:07:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.317 11:07:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:28.317 11:07:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:28.317 11:07:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:28.317 11:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:28.317 [2024-11-20 11:07:17.455025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.317 [2024-11-20 11:07:17.455073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:28.317 [2024-11-20 11:07:17.455090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:28.317 [2024-11-20 11:07:17.455104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.317 [2024-11-20 11:07:17.455131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.317 [2024-11-20 11:07:17.455142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:28.317 [2024-11-20 11:07:17.455153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:28.317 [2024-11-20 11:07:17.455163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.317 [2024-11-20 11:07:17.455183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.317 [2024-11-20 11:07:17.455194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:28.317 [2024-11-20 11:07:17.455204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:28.317 [2024-11-20 11:07:17.455214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.317 [2024-11-20 11:07:17.455272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.239 ms, result 0 00:30:28.317 true 00:30:28.317 11:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:28.576 { 00:30:28.576 "name": "ftl", 00:30:28.576 "properties": [ 00:30:28.576 { 00:30:28.576 "name": "superblock_version", 00:30:28.576 "value": 5, 00:30:28.576 "read-only": true 00:30:28.576 }, 
00:30:28.576 { 00:30:28.576 "name": "base_device", 00:30:28.576 "bands": [ 00:30:28.576 { 00:30:28.576 "id": 0, 00:30:28.576 "state": "CLOSED", 00:30:28.576 "validity": 1.0 00:30:28.576 }, 00:30:28.576 { 00:30:28.576 "id": 1, 00:30:28.576 "state": "CLOSED", 00:30:28.576 "validity": 1.0 00:30:28.576 }, 00:30:28.576 { 00:30:28.576 "id": 2, 00:30:28.576 "state": "CLOSED", 00:30:28.576 "validity": 0.007843137254901933 00:30:28.576 }, 00:30:28.576 { 00:30:28.576 "id": 3, 00:30:28.576 "state": "FREE", 00:30:28.576 "validity": 0.0 00:30:28.576 }, 00:30:28.576 { 00:30:28.576 "id": 4, 00:30:28.576 "state": "FREE", 00:30:28.576 "validity": 0.0 00:30:28.576 }, 00:30:28.576 { 00:30:28.576 "id": 5, 00:30:28.576 "state": "FREE", 00:30:28.577 "validity": 0.0 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "id": 6, 00:30:28.577 "state": "FREE", 00:30:28.577 "validity": 0.0 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "id": 7, 00:30:28.577 "state": "FREE", 00:30:28.577 "validity": 0.0 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "id": 8, 00:30:28.577 "state": "FREE", 00:30:28.577 "validity": 0.0 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "id": 9, 00:30:28.577 "state": "FREE", 00:30:28.577 "validity": 0.0 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "id": 10, 00:30:28.577 "state": "FREE", 00:30:28.577 "validity": 0.0 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "id": 11, 00:30:28.577 "state": "FREE", 00:30:28.577 "validity": 0.0 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "id": 12, 00:30:28.577 "state": "FREE", 00:30:28.577 "validity": 0.0 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "id": 13, 00:30:28.577 "state": "FREE", 00:30:28.577 "validity": 0.0 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "id": 14, 00:30:28.577 "state": "FREE", 00:30:28.577 "validity": 0.0 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "id": 15, 00:30:28.577 "state": "FREE", 00:30:28.577 "validity": 0.0 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "id": 16, 00:30:28.577 "state": "FREE", 00:30:28.577 "validity": 0.0 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "id": 17, 00:30:28.577 "state": "FREE", 00:30:28.577 "validity": 0.0 00:30:28.577 } 00:30:28.577 ], 00:30:28.577 "read-only": true 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "name": "cache_device", 00:30:28.577 "type": "bdev", 00:30:28.577 "chunks": [ 00:30:28.577 { 00:30:28.577 "id": 0, 00:30:28.577 "state": "INACTIVE", 00:30:28.577 "utilization": 0.0 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "id": 1, 00:30:28.577 "state": "OPEN", 00:30:28.577 "utilization": 0.0 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "id": 2, 00:30:28.577 "state": "OPEN", 00:30:28.577 "utilization": 0.0 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "id": 3, 00:30:28.577 "state": "FREE", 00:30:28.577 "utilization": 0.0 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "id": 4, 00:30:28.577 "state": "FREE", 00:30:28.577 "utilization": 0.0 00:30:28.577 } 00:30:28.577 ], 00:30:28.577 "read-only": true 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "name": "verbose_mode", 00:30:28.577 "value": true, 00:30:28.577 "unit": "", 00:30:28.577 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:28.577 }, 00:30:28.577 { 00:30:28.577 "name": "prep_upgrade_on_shutdown", 00:30:28.577 "value": false, 00:30:28.577 "unit": "", 00:30:28.577 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:28.577 } 00:30:28.577 ] 00:30:28.577 } 00:30:28.577 11:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:28.577 11:07:17 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:28.577 11:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:28.837 11:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:28.837 11:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:28.837 11:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:28.837 11:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:28.837 11:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:28.837 11:07:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:28.837 11:07:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:28.837 11:07:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:28.837 11:07:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:28.837 11:07:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:28.837 11:07:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:28.837 Validate MD5 checksum, iteration 1 00:30:28.837 11:07:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:28.837 11:07:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:28.837 11:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:28.837 11:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:28.837 11:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:28.837 11:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:28.837 11:07:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:29.096 [2024-11-20 11:07:18.156128] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
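(Aside: before the checksum pass launched above, the script verified the drained state via two jq counts over bdev_ftl_get_properties; collected from the xtrace fragments, with the filters copied verbatim from the trace:)

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Count cache chunks that still hold data after the clean shutdown...
    used=$("$RPC" bdev_ftl_get_properties -b ftl | jq \
        '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')

    # ...and bands left in the OPENED state.
    opened=$("$RPC" bdev_ftl_get_properties -b ftl | jq \
        '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')

    # prep_upgrade_on_shutdown must have drained everything: both counts are 0.
    [[ $used -eq 0 && $opened -eq 0 ]]

In the run above both counts come back 0 (used=0, opened=0), matching the zero-utilization chunks in the post-restart property dump.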
00:30:29.097 [2024-11-20 11:07:18.156255] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83827 ] 00:30:29.097 [2024-11-20 11:07:18.335036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.355 [2024-11-20 11:07:18.441613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.260  [2024-11-20T11:07:20.771Z] Copying: 709/1024 [MB] (709 MBps) [2024-11-20T11:07:22.146Z] Copying: 1024/1024 [MB] (average 704 MBps) 00:30:32.893 00:30:32.893 11:07:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:32.894 11:07:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:34.863 11:07:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:34.863 11:07:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7ac4f8eab97cfa05e9f589913700d385 00:30:34.863 Validate MD5 checksum, iteration 2 00:30:34.863 11:07:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7ac4f8eab97cfa05e9f589913700d385 != \7\a\c\4\f\8\e\a\b\9\7\c\f\a\0\5\e\9\f\5\8\9\9\1\3\7\0\0\d\3\8\5 ]] 00:30:34.863 11:07:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:34.863 11:07:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:34.863 11:07:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:34.863 11:07:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:34.863 11:07:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:34.864 11:07:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:34.864 11:07:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:34.864 11:07:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:34.864 11:07:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:34.864 [2024-11-20 11:07:23.722134] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
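(Aside: the two "Validate MD5 checksum" iterations traced above follow one loop: read a 1 GiB slice of ftln1 over NVMe/TCP with spdk_dd, hash it, and compare against the sum recorded when the data was written before shutdown. A sketch, with EXPECTED[] standing in as a hypothetical array of those recorded sums; the spdk_dd invocation is the one from the trace:)

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    FILE=$SPDK_DIR/test/ftl/file

    skip=0
    for expected in "${EXPECTED[@]}"; do   # hypothetical: sums captured at write time
        echo "Validate MD5 checksum, iteration $((skip / 1024 + 1))"
        # 1024 x 1 MiB blocks per slice, queue depth 2, --skip offset in blocks.
        "$SPDK_DIR/build/bin/spdk_dd" '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$SPDK_DIR/test/ftl/config/ini.json" \
            --ib=ftln1 --of="$FILE" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        sum=$(md5sum "$FILE" | cut -f1 -d' ')
        [[ $sum == "$expected" ]] || exit 1   # mismatch: data lost across shutdown
        skip=$((skip + 1024))
    done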
00:30:34.864 [2024-11-20 11:07:23.722245] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83893 ] 00:30:34.864 [2024-11-20 11:07:23.904313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.864 [2024-11-20 11:07:24.009430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.767  [2024-11-20T11:07:26.278Z] Copying: 706/1024 [MB] (706 MBps) [2024-11-20T11:07:29.562Z] Copying: 1024/1024 [MB] (average 708 MBps) 00:30:40.309 00:30:40.309 11:07:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:40.309 11:07:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1c4cbf70501ed323917280296448796d 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1c4cbf70501ed323917280296448796d != \1\c\4\c\b\f\7\0\5\0\1\e\d\3\2\3\9\1\7\2\8\0\2\9\6\4\4\8\7\9\6\d ]] 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83743 ]] 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83743 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83969 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83969 00:30:41.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83969 ']' 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
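The dirty-shutdown/restart step traced here (common.sh@137–@139 followed by tcp_target_setup at @81–@91) amounts to the fragment below; a minimal sketch assuming $spdk_tgt_pid was saved when the target first started and that autotest_common.sh, which provides waitforlisten, is already sourced:

# Kill the target without letting FTL shut down cleanly, so the next
# startup must replay P2L checkpoints and recover open NV-cache chunks
# (the trace_step records that follow show exactly that recovery).
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json

kill -9 "$spdk_tgt_pid"          # SIGKILL: no clean FTL shutdown, on purpose
unset spdk_tgt_pid

# Restart from the same JSON config on core 0, as common.sh@85 does.
"$spdk_tgt" '--cpumask=[0]' --config="$cnfg" &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"    # blocks until /var/tmp/spdk.sock is up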
00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:41.684 11:07:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:41.684 [2024-11-20 11:07:30.840649] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:30:41.684 [2024-11-20 11:07:30.840961] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83969 ] 00:30:41.943 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83743 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:41.943 [2024-11-20 11:07:31.021337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.943 [2024-11-20 11:07:31.135118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.879 [2024-11-20 11:07:32.035385] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:42.879 [2024-11-20 11:07:32.035452] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:43.138 [2024-11-20 11:07:32.181035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.138 [2024-11-20 11:07:32.181077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:43.138 [2024-11-20 11:07:32.181093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:43.138 [2024-11-20 11:07:32.181119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.138 [2024-11-20 11:07:32.181170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.138 [2024-11-20 11:07:32.181182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:43.138 [2024-11-20 11:07:32.181192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:43.138 [2024-11-20 11:07:32.181201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.138 [2024-11-20 11:07:32.181230] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:43.138 [2024-11-20 11:07:32.182264] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:43.138 [2024-11-20 11:07:32.182297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.138 [2024-11-20 11:07:32.182308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:43.138 [2024-11-20 11:07:32.182320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.079 ms 00:30:43.138 [2024-11-20 11:07:32.182330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.138 [2024-11-20 11:07:32.182698] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:43.138 [2024-11-20 11:07:32.205502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.138 [2024-11-20 11:07:32.205540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:43.138 [2024-11-20 11:07:32.205554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.841 ms 00:30:43.138 [2024-11-20 11:07:32.205579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.138 [2024-11-20 11:07:32.219136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:43.138 [2024-11-20 11:07:32.219281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:43.138 [2024-11-20 11:07:32.219321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:43.138 [2024-11-20 11:07:32.219331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.138 [2024-11-20 11:07:32.219834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.138 [2024-11-20 11:07:32.219849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:43.138 [2024-11-20 11:07:32.219861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.417 ms 00:30:43.138 [2024-11-20 11:07:32.219871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.138 [2024-11-20 11:07:32.219926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.138 [2024-11-20 11:07:32.219942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:43.138 [2024-11-20 11:07:32.219952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:30:43.138 [2024-11-20 11:07:32.219962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.138 [2024-11-20 11:07:32.219989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.138 [2024-11-20 11:07:32.220000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:43.138 [2024-11-20 11:07:32.220010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:43.138 [2024-11-20 11:07:32.220020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.138 [2024-11-20 11:07:32.220041] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:43.138 [2024-11-20 11:07:32.224301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.138 [2024-11-20 11:07:32.224330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:43.138 [2024-11-20 11:07:32.224342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.272 ms 00:30:43.138 [2024-11-20 11:07:32.224353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.138 [2024-11-20 11:07:32.224387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.138 [2024-11-20 11:07:32.224399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:43.138 [2024-11-20 11:07:32.224410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:43.138 [2024-11-20 11:07:32.224420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.138 [2024-11-20 11:07:32.224457] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:43.138 [2024-11-20 11:07:32.224480] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:43.138 [2024-11-20 11:07:32.224532] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:43.138 [2024-11-20 11:07:32.224564] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:43.138 [2024-11-20 11:07:32.224666] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:43.138 [2024-11-20 11:07:32.224680] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:43.138 [2024-11-20 11:07:32.224693] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:43.138 [2024-11-20 11:07:32.224706] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:43.138 [2024-11-20 11:07:32.224718] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:43.138 [2024-11-20 11:07:32.224730] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:43.138 [2024-11-20 11:07:32.224740] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:43.138 [2024-11-20 11:07:32.224751] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:43.138 [2024-11-20 11:07:32.224762] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:43.139 [2024-11-20 11:07:32.224784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.139 [2024-11-20 11:07:32.224813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:43.139 [2024-11-20 11:07:32.224824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.330 ms 00:30:43.139 [2024-11-20 11:07:32.224834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.139 [2024-11-20 11:07:32.224910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.139 [2024-11-20 11:07:32.224921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:43.139 [2024-11-20 11:07:32.224943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:30:43.139 [2024-11-20 11:07:32.224952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.139 [2024-11-20 11:07:32.225039] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:43.139 [2024-11-20 11:07:32.225051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:43.139 [2024-11-20 11:07:32.225064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:43.139 [2024-11-20 11:07:32.225074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.139 [2024-11-20 11:07:32.225087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:43.139 [2024-11-20 11:07:32.225097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:43.139 [2024-11-20 11:07:32.225106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:43.139 [2024-11-20 11:07:32.225116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:43.139 [2024-11-20 11:07:32.225126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:43.139 [2024-11-20 11:07:32.225135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.139 [2024-11-20 11:07:32.225144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:43.139 [2024-11-20 11:07:32.225154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:43.139 [2024-11-20 11:07:32.225163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.139 [2024-11-20 11:07:32.225172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:43.139 [2024-11-20 11:07:32.225181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:43.139 [2024-11-20 11:07:32.225190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.139 [2024-11-20 11:07:32.225199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:43.139 [2024-11-20 11:07:32.225208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:43.139 [2024-11-20 11:07:32.225217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.139 [2024-11-20 11:07:32.225226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:43.139 [2024-11-20 11:07:32.225235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:43.139 [2024-11-20 11:07:32.225244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:43.139 [2024-11-20 11:07:32.225254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:43.139 [2024-11-20 11:07:32.225273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:43.139 [2024-11-20 11:07:32.225282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:43.139 [2024-11-20 11:07:32.225291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:43.139 [2024-11-20 11:07:32.225300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:43.139 [2024-11-20 11:07:32.225309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:43.139 [2024-11-20 11:07:32.225319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:43.139 [2024-11-20 11:07:32.225328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:43.139 [2024-11-20 11:07:32.225336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:43.139 [2024-11-20 11:07:32.225345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:43.139 [2024-11-20 11:07:32.225354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:43.139 [2024-11-20 11:07:32.225363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.139 [2024-11-20 11:07:32.225372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:43.139 [2024-11-20 11:07:32.225381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:43.139 [2024-11-20 11:07:32.225391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.139 [2024-11-20 11:07:32.225400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:43.139 [2024-11-20 11:07:32.225410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:43.139 [2024-11-20 11:07:32.225419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.139 [2024-11-20 11:07:32.225428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:43.139 [2024-11-20 11:07:32.225437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:43.139 [2024-11-20 11:07:32.225445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.139 [2024-11-20 11:07:32.225454] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:43.139 [2024-11-20 11:07:32.225465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:43.139 [2024-11-20 11:07:32.225474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:43.139 [2024-11-20 11:07:32.225484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:43.139 [2024-11-20 11:07:32.225493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:43.139 [2024-11-20 11:07:32.225503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:43.139 [2024-11-20 11:07:32.225512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:43.139 [2024-11-20 11:07:32.225521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:43.139 [2024-11-20 11:07:32.225530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:43.139 [2024-11-20 11:07:32.225540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:43.139 [2024-11-20 11:07:32.225550] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:43.139 [2024-11-20 11:07:32.225562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:43.139 [2024-11-20 11:07:32.225574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:43.139 [2024-11-20 11:07:32.225584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:43.139 [2024-11-20 11:07:32.225594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:43.139 [2024-11-20 11:07:32.225605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:43.139 [2024-11-20 11:07:32.225615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:43.139 [2024-11-20 11:07:32.225625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:43.139 [2024-11-20 11:07:32.226046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:43.139 [2024-11-20 11:07:32.226112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:43.139 [2024-11-20 11:07:32.226177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:43.139 [2024-11-20 11:07:32.226228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:43.139 [2024-11-20 11:07:32.226277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:43.139 [2024-11-20 11:07:32.226326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:43.139 [2024-11-20 11:07:32.226375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:43.139 [2024-11-20 11:07:32.226425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:43.139 [2024-11-20 11:07:32.226475] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:43.139 [2024-11-20 11:07:32.226973] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:43.139 [2024-11-20 11:07:32.227235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:43.139 [2024-11-20 11:07:32.227247] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:43.139 [2024-11-20 11:07:32.227258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:43.139 [2024-11-20 11:07:32.227268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:43.139 [2024-11-20 11:07:32.227281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.139 [2024-11-20 11:07:32.227297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:43.139 [2024-11-20 11:07:32.227309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.299 ms 00:30:43.139 [2024-11-20 11:07:32.227319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.139 [2024-11-20 11:07:32.262387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.139 [2024-11-20 11:07:32.262425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:43.139 [2024-11-20 11:07:32.262439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.065 ms 00:30:43.139 [2024-11-20 11:07:32.262450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.139 [2024-11-20 11:07:32.262491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.139 [2024-11-20 11:07:32.262510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:43.139 [2024-11-20 11:07:32.262522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:43.139 [2024-11-20 11:07:32.262532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.139 [2024-11-20 11:07:32.308449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.139 [2024-11-20 11:07:32.308486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:43.139 [2024-11-20 11:07:32.308499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.932 ms 00:30:43.139 [2024-11-20 11:07:32.308525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.139 [2024-11-20 11:07:32.308564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.139 [2024-11-20 11:07:32.308575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:43.139 [2024-11-20 11:07:32.308586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:43.139 [2024-11-20 11:07:32.308596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.139 [2024-11-20 11:07:32.308902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.139 [2024-11-20 11:07:32.308947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:43.139 [2024-11-20 11:07:32.308979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:30:43.139 [2024-11-20 11:07:32.308992] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:43.139 [2024-11-20 11:07:32.309036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.139 [2024-11-20 11:07:32.309047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:43.139 [2024-11-20 11:07:32.309058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:43.139 [2024-11-20 11:07:32.309068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.139 [2024-11-20 11:07:32.329212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.139 [2024-11-20 11:07:32.329246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:43.139 [2024-11-20 11:07:32.329260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.149 ms 00:30:43.139 [2024-11-20 11:07:32.329270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.139 [2024-11-20 11:07:32.329388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.139 [2024-11-20 11:07:32.329403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:43.139 [2024-11-20 11:07:32.329415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:43.139 [2024-11-20 11:07:32.329424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.139 [2024-11-20 11:07:32.366334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.139 [2024-11-20 11:07:32.366373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:43.139 [2024-11-20 11:07:32.366387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.950 ms 00:30:43.139 [2024-11-20 11:07:32.366398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.139 [2024-11-20 11:07:32.380358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.139 [2024-11-20 11:07:32.380512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:43.139 [2024-11-20 11:07:32.380543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.620 ms 00:30:43.139 [2024-11-20 11:07:32.380553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.399 [2024-11-20 11:07:32.462860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.399 [2024-11-20 11:07:32.463051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:43.399 [2024-11-20 11:07:32.463084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 82.349 ms 00:30:43.399 [2024-11-20 11:07:32.463096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.399 [2024-11-20 11:07:32.463337] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:43.399 [2024-11-20 11:07:32.463447] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:43.399 [2024-11-20 11:07:32.463560] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:43.399 [2024-11-20 11:07:32.463683] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:43.399 [2024-11-20 11:07:32.463697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.399 [2024-11-20 11:07:32.463708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:43.399 [2024-11-20 
11:07:32.463719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.457 ms 00:30:43.399 [2024-11-20 11:07:32.463729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.399 [2024-11-20 11:07:32.463820] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:43.399 [2024-11-20 11:07:32.463835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.399 [2024-11-20 11:07:32.463848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:43.399 [2024-11-20 11:07:32.463859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:43.399 [2024-11-20 11:07:32.463869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.399 [2024-11-20 11:07:32.485783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.399 [2024-11-20 11:07:32.485953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:43.399 [2024-11-20 11:07:32.485975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.924 ms 00:30:43.399 [2024-11-20 11:07:32.485986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.399 [2024-11-20 11:07:32.499660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.399 [2024-11-20 11:07:32.499696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:43.399 [2024-11-20 11:07:32.499709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:43.399 [2024-11-20 11:07:32.499719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.399 [2024-11-20 11:07:32.499809] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:43.399 [2024-11-20 11:07:32.499995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.399 [2024-11-20 11:07:32.500009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:43.399 [2024-11-20 11:07:32.500020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:30:43.399 [2024-11-20 11:07:32.500030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.967 [2024-11-20 11:07:33.112992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.967 [2024-11-20 11:07:33.113061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:43.967 [2024-11-20 11:07:33.113080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 612.773 ms 00:30:43.967 [2024-11-20 11:07:33.113091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.967 [2024-11-20 11:07:33.118608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.967 [2024-11-20 11:07:33.118649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:43.967 [2024-11-20 11:07:33.118663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.976 ms 00:30:43.967 [2024-11-20 11:07:33.118673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.967 [2024-11-20 11:07:33.119188] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:43.967 [2024-11-20 11:07:33.119211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.967 [2024-11-20 11:07:33.119222] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:43.967 [2024-11-20 11:07:33.119234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.501 ms 00:30:43.967 [2024-11-20 11:07:33.119244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.967 [2024-11-20 11:07:33.119274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.967 [2024-11-20 11:07:33.119286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:43.967 [2024-11-20 11:07:33.119297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:43.967 [2024-11-20 11:07:33.119307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.967 [2024-11-20 11:07:33.119348] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 620.544 ms, result 0 00:30:43.967 [2024-11-20 11:07:33.119389] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:43.967 [2024-11-20 11:07:33.119462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.967 [2024-11-20 11:07:33.119472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:43.967 [2024-11-20 11:07:33.119482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:30:43.967 [2024-11-20 11:07:33.119491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.537 [2024-11-20 11:07:33.729756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.537 [2024-11-20 11:07:33.729933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:44.537 [2024-11-20 11:07:33.730022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 609.940 ms 00:30:44.537 [2024-11-20 11:07:33.730060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.537 [2024-11-20 11:07:33.735886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.537 [2024-11-20 11:07:33.736029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:44.537 [2024-11-20 11:07:33.736112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.031 ms 00:30:44.537 [2024-11-20 11:07:33.736148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.537 [2024-11-20 11:07:33.736728] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:44.537 [2024-11-20 11:07:33.736877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.537 [2024-11-20 11:07:33.736954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:44.537 [2024-11-20 11:07:33.736990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.673 ms 00:30:44.537 [2024-11-20 11:07:33.737019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.537 [2024-11-20 11:07:33.737171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.537 [2024-11-20 11:07:33.737213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:44.537 [2024-11-20 11:07:33.737245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:44.537 [2024-11-20 11:07:33.737274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.537 [2024-11-20 
11:07:33.737387] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 618.994 ms, result 0 00:30:44.537 [2024-11-20 11:07:33.737473] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:44.537 [2024-11-20 11:07:33.737578] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:44.537 [2024-11-20 11:07:33.737652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.537 [2024-11-20 11:07:33.737683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:44.537 [2024-11-20 11:07:33.737753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1239.869 ms 00:30:44.537 [2024-11-20 11:07:33.737838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.537 [2024-11-20 11:07:33.737899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.537 [2024-11-20 11:07:33.737972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:44.537 [2024-11-20 11:07:33.738066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:44.537 [2024-11-20 11:07:33.738101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.537 [2024-11-20 11:07:33.749689] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:44.537 [2024-11-20 11:07:33.749942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.537 [2024-11-20 11:07:33.749986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:44.537 [2024-11-20 11:07:33.750108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.790 ms 00:30:44.537 [2024-11-20 11:07:33.750144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.537 [2024-11-20 11:07:33.750778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.537 [2024-11-20 11:07:33.750889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:44.537 [2024-11-20 11:07:33.750969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.537 ms 00:30:44.537 [2024-11-20 11:07:33.751004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.537 [2024-11-20 11:07:33.753040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.537 [2024-11-20 11:07:33.753152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:44.537 [2024-11-20 11:07:33.753222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.994 ms 00:30:44.537 [2024-11-20 11:07:33.753237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.537 [2024-11-20 11:07:33.753301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.537 [2024-11-20 11:07:33.753316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:44.537 [2024-11-20 11:07:33.753327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:44.537 [2024-11-20 11:07:33.753343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.537 [2024-11-20 11:07:33.753441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.537 [2024-11-20 11:07:33.753453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:44.537 
[2024-11-20 11:07:33.753464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:44.537 [2024-11-20 11:07:33.753474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.537 [2024-11-20 11:07:33.753495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.537 [2024-11-20 11:07:33.753505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:44.537 [2024-11-20 11:07:33.753516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:44.537 [2024-11-20 11:07:33.753525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.537 [2024-11-20 11:07:33.753558] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:44.537 [2024-11-20 11:07:33.753573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.537 [2024-11-20 11:07:33.753583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:44.537 [2024-11-20 11:07:33.753607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:44.537 [2024-11-20 11:07:33.753617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.537 [2024-11-20 11:07:33.753668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.537 [2024-11-20 11:07:33.753679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:44.537 [2024-11-20 11:07:33.753689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:44.537 [2024-11-20 11:07:33.753699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.537 [2024-11-20 11:07:33.754605] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1575.702 ms, result 0 00:30:44.537 [2024-11-20 11:07:33.766932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.537 [2024-11-20 11:07:33.782912] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:44.797 [2024-11-20 11:07:33.792413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:44.797 Validate MD5 checksum, iteration 1 00:30:44.797 11:07:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.797 11:07:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:44.797 11:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:44.797 11:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:44.797 11:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:44.797 11:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:44.797 11:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:44.797 11:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:44.797 11:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:44.797 11:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:44.797 11:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:44.797 11:07:33 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:44.797 11:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:44.797 11:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:44.797 11:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:44.797 [2024-11-20 11:07:33.928061] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:30:44.797 [2024-11-20 11:07:33.928401] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84009 ] 00:30:45.056 [2024-11-20 11:07:34.108368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.056 [2024-11-20 11:07:34.212730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.961  [2024-11-20T11:07:36.473Z] Copying: 716/1024 [MB] (716 MBps) [2024-11-20T11:07:39.007Z] Copying: 1024/1024 [MB] (average 713 MBps) 00:30:49.754 00:30:49.754 11:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:49.754 11:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:51.658 11:07:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:51.658 Validate MD5 checksum, iteration 2 00:30:51.658 11:07:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7ac4f8eab97cfa05e9f589913700d385 00:30:51.658 11:07:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7ac4f8eab97cfa05e9f589913700d385 != \7\a\c\4\f\8\e\a\b\9\7\c\f\a\0\5\e\9\f\5\8\9\9\1\3\7\0\0\d\3\8\5 ]] 00:30:51.658 11:07:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:51.658 11:07:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:51.658 11:07:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:51.658 11:07:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:51.658 11:07:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:51.658 11:07:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:51.658 11:07:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:51.658 11:07:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:51.658 11:07:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:51.658 [2024-11-20 11:07:40.687888] Starting SPDK v25.01-pre git sha1 
a5dab6cf7 / DPDK 24.03.0 initialization... 00:30:51.658 [2024-11-20 11:07:40.688183] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84077 ] 00:30:51.658 [2024-11-20 11:07:40.865783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.917 [2024-11-20 11:07:40.973898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.821  [2024-11-20T11:07:43.333Z] Copying: 708/1024 [MB] (708 MBps) [2024-11-20T11:07:44.711Z] Copying: 1024/1024 [MB] (average 711 MBps) 00:30:55.459 00:30:55.459 11:07:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:55.459 11:07:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:56.836 11:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:56.836 11:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=1c4cbf70501ed323917280296448796d 00:30:56.836 11:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 1c4cbf70501ed323917280296448796d != \1\c\4\c\b\f\7\0\5\0\1\e\d\3\2\3\9\1\7\2\8\0\2\9\6\4\4\8\7\9\6\d ]] 00:30:56.837 11:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:56.837 11:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:56.837 11:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:56.837 11:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:56.837 11:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:56.837 11:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:57.096 11:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:57.096 11:07:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:57.096 11:07:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:57.096 11:07:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:57.096 11:07:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83969 ]] 00:30:57.096 11:07:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83969 00:30:57.096 11:07:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83969 ']' 00:30:57.096 11:07:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83969 00:30:57.096 11:07:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:57.096 11:07:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:57.096 11:07:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83969 00:30:57.096 killing process with pid 83969 00:30:57.096 11:07:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:57.096 11:07:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:57.096 11:07:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83969' 00:30:57.096 11:07:46 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 83969 00:30:57.096 11:07:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83969 00:30:58.475 [2024-11-20 11:07:47.334955] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:58.475 [2024-11-20 11:07:47.354031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.475 [2024-11-20 11:07:47.354075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:58.475 [2024-11-20 11:07:47.354091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:58.475 [2024-11-20 11:07:47.354117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.475 [2024-11-20 11:07:47.354140] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:58.475 [2024-11-20 11:07:47.358149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.475 [2024-11-20 11:07:47.358176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:58.475 [2024-11-20 11:07:47.358188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.000 ms 00:30:58.475 [2024-11-20 11:07:47.358219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.475 [2024-11-20 11:07:47.358418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.475 [2024-11-20 11:07:47.358431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:58.475 [2024-11-20 11:07:47.358443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:30:58.475 [2024-11-20 11:07:47.358453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.475 [2024-11-20 11:07:47.364110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.475 [2024-11-20 11:07:47.364150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:58.475 [2024-11-20 11:07:47.364164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.649 ms 00:30:58.475 [2024-11-20 11:07:47.364174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.475 [2024-11-20 11:07:47.365136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.475 [2024-11-20 11:07:47.365161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:58.475 [2024-11-20 11:07:47.365173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.921 ms 00:30:58.475 [2024-11-20 11:07:47.365183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.475 [2024-11-20 11:07:47.380296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.475 [2024-11-20 11:07:47.380459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:58.475 [2024-11-20 11:07:47.380482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.105 ms 00:30:58.475 [2024-11-20 11:07:47.380498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.475 [2024-11-20 11:07:47.388523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.475 [2024-11-20 11:07:47.388558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:58.475 [2024-11-20 11:07:47.388571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.971 ms 00:30:58.475 [2024-11-20 11:07:47.388581] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:58.475 [2024-11-20 11:07:47.388712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.475 [2024-11-20 11:07:47.388727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:58.475 [2024-11-20 11:07:47.388738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.061 ms 00:30:58.475 [2024-11-20 11:07:47.388748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.475 [2024-11-20 11:07:47.403310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.475 [2024-11-20 11:07:47.403454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:58.475 [2024-11-20 11:07:47.403474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.564 ms 00:30:58.475 [2024-11-20 11:07:47.403483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.475 [2024-11-20 11:07:47.418879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.475 [2024-11-20 11:07:47.418916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:58.475 [2024-11-20 11:07:47.418928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.363 ms 00:30:58.475 [2024-11-20 11:07:47.418938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.475 [2024-11-20 11:07:47.433036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.475 [2024-11-20 11:07:47.433188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:58.475 [2024-11-20 11:07:47.433209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.086 ms 00:30:58.475 [2024-11-20 11:07:47.433219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.475 [2024-11-20 11:07:47.447604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.475 [2024-11-20 11:07:47.447744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:58.475 [2024-11-20 11:07:47.447764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.317 ms 00:30:58.475 [2024-11-20 11:07:47.447775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.475 [2024-11-20 11:07:47.447830] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:58.475 [2024-11-20 11:07:47.447846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:58.475 [2024-11-20 11:07:47.447859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:58.475 [2024-11-20 11:07:47.447870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:58.475 [2024-11-20 11:07:47.447881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:58.475 [2024-11-20 11:07:47.447892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:58.475 [2024-11-20 11:07:47.447903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:58.475 [2024-11-20 11:07:47.447914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:58.475 [2024-11-20 11:07:47.447924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:58.475 
[2024-11-20 11:07:47.447934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:58.475 [2024-11-20 11:07:47.447945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:58.475 [2024-11-20 11:07:47.447955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:58.475 [2024-11-20 11:07:47.447965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:58.475 [2024-11-20 11:07:47.447976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:58.475 [2024-11-20 11:07:47.447986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:58.475 [2024-11-20 11:07:47.447997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:58.475 [2024-11-20 11:07:47.448007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:58.475 [2024-11-20 11:07:47.448017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:58.475 [2024-11-20 11:07:47.448028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:58.475 [2024-11-20 11:07:47.448040] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:58.475 [2024-11-20 11:07:47.448050] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: b3a5580e-a5cf-4819-83b7-f8caa8f926c3 00:30:58.475 [2024-11-20 11:07:47.448061] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:58.475 [2024-11-20 11:07:47.448071] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:58.475 [2024-11-20 11:07:47.448081] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:58.475 [2024-11-20 11:07:47.448091] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:58.475 [2024-11-20 11:07:47.448101] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:58.475 [2024-11-20 11:07:47.448111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:58.475 [2024-11-20 11:07:47.448121] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:58.475 [2024-11-20 11:07:47.448130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:58.475 [2024-11-20 11:07:47.448139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:58.475 [2024-11-20 11:07:47.448149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.475 [2024-11-20 11:07:47.448167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:58.475 [2024-11-20 11:07:47.448178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.321 ms 00:30:58.476 [2024-11-20 11:07:47.448188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.476 [2024-11-20 11:07:47.468442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.476 [2024-11-20 11:07:47.468473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:58.476 [2024-11-20 11:07:47.468487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.256 ms 00:30:58.476 [2024-11-20 11:07:47.468498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:30:58.476 [2024-11-20 11:07:47.469061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.476 [2024-11-20 11:07:47.469073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:58.476 [2024-11-20 11:07:47.469084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.541 ms 00:30:58.476 [2024-11-20 11:07:47.469094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.476 [2024-11-20 11:07:47.533947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.476 [2024-11-20 11:07:47.534114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:58.476 [2024-11-20 11:07:47.534135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.476 [2024-11-20 11:07:47.534146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.476 [2024-11-20 11:07:47.534189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.476 [2024-11-20 11:07:47.534200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:58.476 [2024-11-20 11:07:47.534211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.476 [2024-11-20 11:07:47.534220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.476 [2024-11-20 11:07:47.534317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.476 [2024-11-20 11:07:47.534330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:58.476 [2024-11-20 11:07:47.534341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.476 [2024-11-20 11:07:47.534351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.476 [2024-11-20 11:07:47.534369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.476 [2024-11-20 11:07:47.534385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:58.476 [2024-11-20 11:07:47.534396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.476 [2024-11-20 11:07:47.534406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.476 [2024-11-20 11:07:47.656754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.476 [2024-11-20 11:07:47.656806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:58.476 [2024-11-20 11:07:47.656821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.476 [2024-11-20 11:07:47.656848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.735 [2024-11-20 11:07:47.755667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.735 [2024-11-20 11:07:47.755899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:58.735 [2024-11-20 11:07:47.755922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.735 [2024-11-20 11:07:47.755934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.735 [2024-11-20 11:07:47.756038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.735 [2024-11-20 11:07:47.756051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:58.735 [2024-11-20 11:07:47.756062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.735 [2024-11-20 11:07:47.756073] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.735 [2024-11-20 11:07:47.756117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.735 [2024-11-20 11:07:47.756128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:58.735 [2024-11-20 11:07:47.756145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.735 [2024-11-20 11:07:47.756165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.735 [2024-11-20 11:07:47.756272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.735 [2024-11-20 11:07:47.756285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:58.735 [2024-11-20 11:07:47.756295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.735 [2024-11-20 11:07:47.756305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.735 [2024-11-20 11:07:47.756342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.735 [2024-11-20 11:07:47.756354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:58.735 [2024-11-20 11:07:47.756364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.735 [2024-11-20 11:07:47.756378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.735 [2024-11-20 11:07:47.756416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.736 [2024-11-20 11:07:47.756427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:58.736 [2024-11-20 11:07:47.756437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.736 [2024-11-20 11:07:47.756447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.736 [2024-11-20 11:07:47.756487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.736 [2024-11-20 11:07:47.756498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:58.736 [2024-11-20 11:07:47.756512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.736 [2024-11-20 11:07:47.756522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.736 [2024-11-20 11:07:47.756662] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 403.225 ms, result 0 00:31:01.269 11:07:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:01.269 11:07:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:01.269 11:07:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:01.269 11:07:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:01.269 11:07:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:01.269 11:07:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:01.269 Remove shared memory files 00:31:01.269 11:07:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:01.269 11:07:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:01.269 11:07:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:01.269 11:07:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:01.269 11:07:49 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83743 00:31:01.269 11:07:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:01.269 11:07:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:01.269 ************************************ 00:31:01.269 END TEST ftl_upgrade_shutdown 00:31:01.269 ************************************ 00:31:01.269 00:31:01.269 real 1m29.924s 00:31:01.269 user 2m2.150s 00:31:01.269 sys 0m21.319s 00:31:01.269 11:07:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:01.269 11:07:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:01.269 11:07:50 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:31:01.269 11:07:50 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:01.269 11:07:50 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:31:01.269 11:07:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:01.269 11:07:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:01.269 ************************************ 00:31:01.269 START TEST ftl_restore_fast 00:31:01.269 ************************************ 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:01.269 * Looking for test storage... 00:31:01.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:01.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.269 --rc genhtml_branch_coverage=1 00:31:01.269 --rc genhtml_function_coverage=1 00:31:01.269 --rc genhtml_legend=1 00:31:01.269 --rc geninfo_all_blocks=1 00:31:01.269 --rc geninfo_unexecuted_blocks=1 00:31:01.269 00:31:01.269 ' 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:01.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.269 --rc genhtml_branch_coverage=1 00:31:01.269 --rc genhtml_function_coverage=1 00:31:01.269 --rc genhtml_legend=1 00:31:01.269 --rc geninfo_all_blocks=1 00:31:01.269 --rc geninfo_unexecuted_blocks=1 00:31:01.269 00:31:01.269 ' 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:01.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.269 --rc genhtml_branch_coverage=1 00:31:01.269 --rc genhtml_function_coverage=1 00:31:01.269 --rc genhtml_legend=1 00:31:01.269 --rc geninfo_all_blocks=1 00:31:01.269 --rc geninfo_unexecuted_blocks=1 00:31:01.269 00:31:01.269 ' 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:01.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:01.269 --rc genhtml_branch_coverage=1 00:31:01.269 --rc genhtml_function_coverage=1 00:31:01.269 --rc genhtml_legend=1 00:31:01.269 --rc geninfo_all_blocks=1 00:31:01.269 --rc geninfo_unexecuted_blocks=1 00:31:01.269 00:31:01.269 ' 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:01.269 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.dz5LJgjCsV 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:31:01.270 11:07:50 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=84251 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 84251 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 84251 ']' 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.270 11:07:50 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:01.270 [2024-11-20 11:07:50.400629] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:31:01.270 [2024-11-20 11:07:50.400961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84251 ] 00:31:01.529 [2024-11-20 11:07:50.582133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.529 [2024-11-20 11:07:50.681401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.466 11:07:51 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.466 11:07:51 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:31:02.466 11:07:51 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:02.466 11:07:51 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:31:02.466 11:07:51 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:02.466 11:07:51 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:31:02.466 11:07:51 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:31:02.466 11:07:51 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:02.725 11:07:51 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:02.725 11:07:51 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:31:02.725 11:07:51 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:02.725 11:07:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:02.725 11:07:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:02.725 11:07:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:02.725 11:07:51 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:31:02.725 11:07:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:02.725 11:07:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:02.725 { 00:31:02.725 "name": "nvme0n1", 00:31:02.725 "aliases": [ 00:31:02.725 "93f978fd-495c-4e69-938b-8980291b2f6c" 00:31:02.725 ], 00:31:02.725 "product_name": "NVMe disk", 00:31:02.725 "block_size": 4096, 00:31:02.725 "num_blocks": 1310720, 00:31:02.725 "uuid": "93f978fd-495c-4e69-938b-8980291b2f6c", 00:31:02.725 "numa_id": -1, 00:31:02.725 "assigned_rate_limits": { 00:31:02.725 "rw_ios_per_sec": 0, 00:31:02.725 "rw_mbytes_per_sec": 0, 00:31:02.725 "r_mbytes_per_sec": 0, 00:31:02.725 "w_mbytes_per_sec": 0 00:31:02.725 }, 00:31:02.725 "claimed": true, 00:31:02.725 "claim_type": "read_many_write_one", 00:31:02.725 "zoned": false, 00:31:02.725 "supported_io_types": { 00:31:02.725 "read": true, 00:31:02.725 "write": true, 00:31:02.725 "unmap": true, 00:31:02.725 "flush": true, 00:31:02.725 "reset": true, 00:31:02.726 "nvme_admin": true, 00:31:02.726 "nvme_io": true, 00:31:02.726 "nvme_io_md": false, 00:31:02.726 "write_zeroes": true, 00:31:02.726 "zcopy": false, 00:31:02.726 "get_zone_info": false, 00:31:02.726 "zone_management": false, 00:31:02.726 "zone_append": false, 00:31:02.726 "compare": true, 00:31:02.726 "compare_and_write": false, 00:31:02.726 "abort": true, 00:31:02.726 "seek_hole": false, 00:31:02.726 "seek_data": false, 00:31:02.726 "copy": true, 00:31:02.726 "nvme_iov_md": false 00:31:02.726 }, 00:31:02.726 "driver_specific": { 00:31:02.726 "nvme": [ 00:31:02.726 { 00:31:02.726 "pci_address": "0000:00:11.0", 00:31:02.726 "trid": { 00:31:02.726 "trtype": "PCIe", 00:31:02.726 "traddr": "0000:00:11.0" 00:31:02.726 }, 00:31:02.726 "ctrlr_data": { 00:31:02.726 "cntlid": 0, 00:31:02.726 "vendor_id": "0x1b36", 00:31:02.726 "model_number": "QEMU NVMe Ctrl", 00:31:02.726 "serial_number": "12341", 00:31:02.726 "firmware_revision": "8.0.0", 00:31:02.726 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:02.726 "oacs": { 00:31:02.726 "security": 0, 00:31:02.726 "format": 1, 00:31:02.726 "firmware": 0, 00:31:02.726 "ns_manage": 1 00:31:02.726 }, 00:31:02.726 "multi_ctrlr": false, 00:31:02.726 "ana_reporting": false 00:31:02.726 }, 00:31:02.726 "vs": { 00:31:02.726 "nvme_version": "1.4" 00:31:02.726 }, 00:31:02.726 "ns_data": { 00:31:02.726 "id": 1, 00:31:02.726 "can_share": false 00:31:02.726 } 00:31:02.726 } 00:31:02.726 ], 00:31:02.726 "mp_policy": "active_passive" 00:31:02.726 } 00:31:02.726 } 00:31:02.726 ]' 00:31:02.726 11:07:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:02.985 11:07:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:02.985 11:07:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:02.985 11:07:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:02.985 11:07:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:02.985 11:07:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:31:02.985 11:07:52 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:31:02.985 11:07:52 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:02.985 11:07:52 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:31:02.985 11:07:52 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:31:02.985 11:07:52 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:02.985 11:07:52 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=1f321274-087d-49f9-8633-9bbc4207f259 00:31:02.985 11:07:52 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:31:02.985 11:07:52 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1f321274-087d-49f9-8633-9bbc4207f259 00:31:03.243 11:07:52 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:03.502 11:07:52 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=976df8ff-b8b5-4826-bb25-83a0f875f10c 00:31:03.502 11:07:52 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 976df8ff-b8b5-4826-bb25-83a0f875f10c 00:31:03.761 11:07:52 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=c666e041-0276-4d22-9856-55b9939cd0a2 00:31:03.761 11:07:52 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:31:03.761 11:07:52 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c666e041-0276-4d22-9856-55b9939cd0a2 00:31:03.761 11:07:52 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:31:03.761 11:07:52 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:03.761 11:07:52 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=c666e041-0276-4d22-9856-55b9939cd0a2 00:31:03.761 11:07:52 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:31:03.761 11:07:52 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size c666e041-0276-4d22-9856-55b9939cd0a2 00:31:03.761 11:07:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c666e041-0276-4d22-9856-55b9939cd0a2 00:31:03.761 11:07:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:03.761 11:07:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:03.761 11:07:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:03.761 11:07:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c666e041-0276-4d22-9856-55b9939cd0a2 00:31:04.020 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:04.020 { 00:31:04.020 "name": "c666e041-0276-4d22-9856-55b9939cd0a2", 00:31:04.020 "aliases": [ 00:31:04.020 "lvs/nvme0n1p0" 00:31:04.020 ], 00:31:04.020 "product_name": "Logical Volume", 00:31:04.020 "block_size": 4096, 00:31:04.020 "num_blocks": 26476544, 00:31:04.020 "uuid": "c666e041-0276-4d22-9856-55b9939cd0a2", 00:31:04.020 "assigned_rate_limits": { 00:31:04.020 "rw_ios_per_sec": 0, 00:31:04.020 "rw_mbytes_per_sec": 0, 00:31:04.020 "r_mbytes_per_sec": 0, 00:31:04.020 "w_mbytes_per_sec": 0 00:31:04.020 }, 00:31:04.020 "claimed": false, 00:31:04.020 "zoned": false, 00:31:04.020 "supported_io_types": { 00:31:04.020 "read": true, 00:31:04.020 "write": true, 00:31:04.020 "unmap": true, 00:31:04.020 "flush": false, 00:31:04.020 "reset": true, 00:31:04.020 "nvme_admin": false, 00:31:04.020 "nvme_io": false, 00:31:04.020 "nvme_io_md": false, 00:31:04.020 "write_zeroes": true, 00:31:04.020 "zcopy": false, 00:31:04.020 "get_zone_info": false, 00:31:04.020 "zone_management": false, 00:31:04.020 "zone_append": 
false, 00:31:04.020 "compare": false, 00:31:04.020 "compare_and_write": false, 00:31:04.020 "abort": false, 00:31:04.020 "seek_hole": true, 00:31:04.020 "seek_data": true, 00:31:04.020 "copy": false, 00:31:04.020 "nvme_iov_md": false 00:31:04.020 }, 00:31:04.020 "driver_specific": { 00:31:04.020 "lvol": { 00:31:04.020 "lvol_store_uuid": "976df8ff-b8b5-4826-bb25-83a0f875f10c", 00:31:04.020 "base_bdev": "nvme0n1", 00:31:04.020 "thin_provision": true, 00:31:04.020 "num_allocated_clusters": 0, 00:31:04.020 "snapshot": false, 00:31:04.020 "clone": false, 00:31:04.020 "esnap_clone": false 00:31:04.020 } 00:31:04.020 } 00:31:04.020 } 00:31:04.020 ]' 00:31:04.020 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:04.020 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:04.020 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:04.020 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:04.020 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:04.020 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:04.020 11:07:53 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:31:04.020 11:07:53 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:31:04.020 11:07:53 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:04.279 11:07:53 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:04.279 11:07:53 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:04.279 11:07:53 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size c666e041-0276-4d22-9856-55b9939cd0a2 00:31:04.279 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c666e041-0276-4d22-9856-55b9939cd0a2 00:31:04.279 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:04.279 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:04.279 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:04.279 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c666e041-0276-4d22-9856-55b9939cd0a2 00:31:04.537 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:04.538 { 00:31:04.538 "name": "c666e041-0276-4d22-9856-55b9939cd0a2", 00:31:04.538 "aliases": [ 00:31:04.538 "lvs/nvme0n1p0" 00:31:04.538 ], 00:31:04.538 "product_name": "Logical Volume", 00:31:04.538 "block_size": 4096, 00:31:04.538 "num_blocks": 26476544, 00:31:04.538 "uuid": "c666e041-0276-4d22-9856-55b9939cd0a2", 00:31:04.538 "assigned_rate_limits": { 00:31:04.538 "rw_ios_per_sec": 0, 00:31:04.538 "rw_mbytes_per_sec": 0, 00:31:04.538 "r_mbytes_per_sec": 0, 00:31:04.538 "w_mbytes_per_sec": 0 00:31:04.538 }, 00:31:04.538 "claimed": false, 00:31:04.538 "zoned": false, 00:31:04.538 "supported_io_types": { 00:31:04.538 "read": true, 00:31:04.538 "write": true, 00:31:04.538 "unmap": true, 00:31:04.538 "flush": false, 00:31:04.538 "reset": true, 00:31:04.538 "nvme_admin": false, 00:31:04.538 "nvme_io": false, 00:31:04.538 "nvme_io_md": false, 00:31:04.538 "write_zeroes": true, 00:31:04.538 "zcopy": false, 00:31:04.538 "get_zone_info": false, 00:31:04.538 "zone_management": false, 
00:31:04.538 "zone_append": false, 00:31:04.538 "compare": false, 00:31:04.538 "compare_and_write": false, 00:31:04.538 "abort": false, 00:31:04.538 "seek_hole": true, 00:31:04.538 "seek_data": true, 00:31:04.538 "copy": false, 00:31:04.538 "nvme_iov_md": false 00:31:04.538 }, 00:31:04.538 "driver_specific": { 00:31:04.538 "lvol": { 00:31:04.538 "lvol_store_uuid": "976df8ff-b8b5-4826-bb25-83a0f875f10c", 00:31:04.538 "base_bdev": "nvme0n1", 00:31:04.538 "thin_provision": true, 00:31:04.538 "num_allocated_clusters": 0, 00:31:04.538 "snapshot": false, 00:31:04.538 "clone": false, 00:31:04.538 "esnap_clone": false 00:31:04.538 } 00:31:04.538 } 00:31:04.538 } 00:31:04.538 ]' 00:31:04.538 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:04.538 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:04.538 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:04.538 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:04.538 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:04.538 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:04.538 11:07:53 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:04.538 11:07:53 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:04.796 11:07:53 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:31:04.796 11:07:53 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size c666e041-0276-4d22-9856-55b9939cd0a2 00:31:04.796 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=c666e041-0276-4d22-9856-55b9939cd0a2 00:31:04.796 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:04.796 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:04.796 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:04.796 11:07:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c666e041-0276-4d22-9856-55b9939cd0a2 00:31:04.797 11:07:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:04.797 { 00:31:04.797 "name": "c666e041-0276-4d22-9856-55b9939cd0a2", 00:31:04.797 "aliases": [ 00:31:04.797 "lvs/nvme0n1p0" 00:31:04.797 ], 00:31:04.797 "product_name": "Logical Volume", 00:31:04.797 "block_size": 4096, 00:31:04.797 "num_blocks": 26476544, 00:31:04.797 "uuid": "c666e041-0276-4d22-9856-55b9939cd0a2", 00:31:04.797 "assigned_rate_limits": { 00:31:04.797 "rw_ios_per_sec": 0, 00:31:04.797 "rw_mbytes_per_sec": 0, 00:31:04.797 "r_mbytes_per_sec": 0, 00:31:04.797 "w_mbytes_per_sec": 0 00:31:04.797 }, 00:31:04.797 "claimed": false, 00:31:04.797 "zoned": false, 00:31:04.797 "supported_io_types": { 00:31:04.797 "read": true, 00:31:04.797 "write": true, 00:31:04.797 "unmap": true, 00:31:04.797 "flush": false, 00:31:04.797 "reset": true, 00:31:04.797 "nvme_admin": false, 00:31:04.797 "nvme_io": false, 00:31:04.797 "nvme_io_md": false, 00:31:04.797 "write_zeroes": true, 00:31:04.797 "zcopy": false, 00:31:04.797 "get_zone_info": false, 00:31:04.797 "zone_management": false, 00:31:04.797 "zone_append": false, 00:31:04.797 "compare": false, 00:31:04.797 "compare_and_write": false, 00:31:04.797 "abort": false, 00:31:04.797 "seek_hole": 
true, 00:31:04.797 "seek_data": true, 00:31:04.797 "copy": false, 00:31:04.797 "nvme_iov_md": false 00:31:04.797 }, 00:31:04.797 "driver_specific": { 00:31:04.797 "lvol": { 00:31:04.797 "lvol_store_uuid": "976df8ff-b8b5-4826-bb25-83a0f875f10c", 00:31:04.797 "base_bdev": "nvme0n1", 00:31:04.797 "thin_provision": true, 00:31:04.797 "num_allocated_clusters": 0, 00:31:04.797 "snapshot": false, 00:31:04.797 "clone": false, 00:31:04.797 "esnap_clone": false 00:31:04.797 } 00:31:04.797 } 00:31:04.797 } 00:31:04.797 ]' 00:31:04.797 11:07:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:05.056 11:07:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:05.056 11:07:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:05.056 11:07:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:05.056 11:07:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:05.056 11:07:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:05.056 11:07:54 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:05.056 11:07:54 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c666e041-0276-4d22-9856-55b9939cd0a2 --l2p_dram_limit 10' 00:31:05.056 11:07:54 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:05.056 11:07:54 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:05.056 11:07:54 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:05.056 11:07:54 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:05.056 11:07:54 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:05.056 11:07:54 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c666e041-0276-4d22-9856-55b9939cd0a2 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:05.056 [2024-11-20 11:07:54.264171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.056 [2024-11-20 11:07:54.264372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:05.056 [2024-11-20 11:07:54.264548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:05.056 [2024-11-20 11:07:54.264588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.056 [2024-11-20 11:07:54.264705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.056 [2024-11-20 11:07:54.264814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:05.056 [2024-11-20 11:07:54.264877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:05.056 [2024-11-20 11:07:54.264908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.056 [2024-11-20 11:07:54.264963] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:05.056 [2024-11-20 11:07:54.265998] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:05.056 [2024-11-20 11:07:54.266143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.056 [2024-11-20 11:07:54.266213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:05.056 [2024-11-20 11:07:54.266252] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:31:05.056 [2024-11-20 11:07:54.266282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.056 [2024-11-20 11:07:54.266513] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1b367e70-c7da-47b1-b21d-8a8452023d94 00:31:05.056 [2024-11-20 11:07:54.267977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.056 [2024-11-20 11:07:54.268120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:05.056 [2024-11-20 11:07:54.268195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:05.056 [2024-11-20 11:07:54.268237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.056 [2024-11-20 11:07:54.275765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.056 [2024-11-20 11:07:54.275911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:05.056 [2024-11-20 11:07:54.276057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.414 ms 00:31:05.056 [2024-11-20 11:07:54.276098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.056 [2024-11-20 11:07:54.276221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.056 [2024-11-20 11:07:54.276363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:05.056 [2024-11-20 11:07:54.276435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:31:05.056 [2024-11-20 11:07:54.276472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.056 [2024-11-20 11:07:54.276557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.056 [2024-11-20 11:07:54.276613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:05.056 [2024-11-20 11:07:54.276649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:05.056 [2024-11-20 11:07:54.276862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.056 [2024-11-20 11:07:54.276920] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:05.056 [2024-11-20 11:07:54.282111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.056 [2024-11-20 11:07:54.282242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:05.056 [2024-11-20 11:07:54.282370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.206 ms 00:31:05.056 [2024-11-20 11:07:54.282406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.056 [2024-11-20 11:07:54.282466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.056 [2024-11-20 11:07:54.282644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:05.056 [2024-11-20 11:07:54.282666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:05.056 [2024-11-20 11:07:54.282676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.056 [2024-11-20 11:07:54.282720] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:05.056 [2024-11-20 11:07:54.282846] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:05.056 [2024-11-20 11:07:54.282865] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:05.056 [2024-11-20 11:07:54.282879] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:05.056 [2024-11-20 11:07:54.282895] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:05.056 [2024-11-20 11:07:54.282907] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:05.056 [2024-11-20 11:07:54.282921] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:05.056 [2024-11-20 11:07:54.282931] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:05.056 [2024-11-20 11:07:54.282946] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:05.057 [2024-11-20 11:07:54.282956] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:05.057 [2024-11-20 11:07:54.282969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.057 [2024-11-20 11:07:54.282979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:05.057 [2024-11-20 11:07:54.282992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:31:05.057 [2024-11-20 11:07:54.283014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.057 [2024-11-20 11:07:54.283089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.057 [2024-11-20 11:07:54.283100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:05.057 [2024-11-20 11:07:54.283113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:31:05.057 [2024-11-20 11:07:54.283122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.057 [2024-11-20 11:07:54.283219] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:05.057 [2024-11-20 11:07:54.283232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:05.057 [2024-11-20 11:07:54.283244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:05.057 [2024-11-20 11:07:54.283255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.057 [2024-11-20 11:07:54.283268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:05.057 [2024-11-20 11:07:54.283277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:05.057 [2024-11-20 11:07:54.283289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:05.057 [2024-11-20 11:07:54.283298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:05.057 [2024-11-20 11:07:54.283310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:05.057 [2024-11-20 11:07:54.283319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:05.057 [2024-11-20 11:07:54.283331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:05.057 [2024-11-20 11:07:54.283341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:05.057 [2024-11-20 11:07:54.283352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:05.057 [2024-11-20 11:07:54.283361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:05.057 [2024-11-20 11:07:54.283373] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:31:05.057 [2024-11-20 11:07:54.283383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.057 [2024-11-20 11:07:54.283400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:05.057 [2024-11-20 11:07:54.283410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:05.057 [2024-11-20 11:07:54.283421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.057 [2024-11-20 11:07:54.283431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:05.057 [2024-11-20 11:07:54.283442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:05.057 [2024-11-20 11:07:54.283451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:05.057 [2024-11-20 11:07:54.283463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:05.057 [2024-11-20 11:07:54.283472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:05.057 [2024-11-20 11:07:54.283484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:05.057 [2024-11-20 11:07:54.283493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:05.057 [2024-11-20 11:07:54.283505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:05.057 [2024-11-20 11:07:54.283514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:05.057 [2024-11-20 11:07:54.283526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:05.057 [2024-11-20 11:07:54.283535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:05.057 [2024-11-20 11:07:54.283546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:05.057 [2024-11-20 11:07:54.283555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:05.057 [2024-11-20 11:07:54.283569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:05.057 [2024-11-20 11:07:54.283578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:05.057 [2024-11-20 11:07:54.283589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:05.057 [2024-11-20 11:07:54.283768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:05.057 [2024-11-20 11:07:54.283806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:05.057 [2024-11-20 11:07:54.283837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:05.057 [2024-11-20 11:07:54.283869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:05.057 [2024-11-20 11:07:54.283945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.057 [2024-11-20 11:07:54.283984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:05.057 [2024-11-20 11:07:54.284014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:05.057 [2024-11-20 11:07:54.284046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.057 [2024-11-20 11:07:54.284075] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:05.057 [2024-11-20 11:07:54.284141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:05.057 [2024-11-20 11:07:54.284229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:05.057 [2024-11-20 
11:07:54.284309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:05.057 [2024-11-20 11:07:54.284345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:05.057 [2024-11-20 11:07:54.284382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:05.057 [2024-11-20 11:07:54.284412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:05.057 [2024-11-20 11:07:54.284573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:05.057 [2024-11-20 11:07:54.284627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:05.057 [2024-11-20 11:07:54.284662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:05.057 [2024-11-20 11:07:54.284697] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:05.057 [2024-11-20 11:07:54.284799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:05.057 [2024-11-20 11:07:54.284856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:05.057 [2024-11-20 11:07:54.284906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:05.057 [2024-11-20 11:07:54.284984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:05.057 [2024-11-20 11:07:54.285003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:05.057 [2024-11-20 11:07:54.285014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:05.057 [2024-11-20 11:07:54.285026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:05.057 [2024-11-20 11:07:54.285037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:05.057 [2024-11-20 11:07:54.285050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:05.057 [2024-11-20 11:07:54.285060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:05.057 [2024-11-20 11:07:54.285076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:05.057 [2024-11-20 11:07:54.285086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:05.057 [2024-11-20 11:07:54.285101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:05.057 [2024-11-20 11:07:54.285111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:05.057 [2024-11-20 11:07:54.285124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:05.057 [2024-11-20 
11:07:54.285134] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:05.057 [2024-11-20 11:07:54.285149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:05.057 [2024-11-20 11:07:54.285160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:05.057 [2024-11-20 11:07:54.285174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:05.057 [2024-11-20 11:07:54.285184] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:05.057 [2024-11-20 11:07:54.285197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:05.057 [2024-11-20 11:07:54.285209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.057 [2024-11-20 11:07:54.285222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:05.057 [2024-11-20 11:07:54.285233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.051 ms 00:31:05.057 [2024-11-20 11:07:54.285246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.057 [2024-11-20 11:07:54.285294] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:05.057 [2024-11-20 11:07:54.285315] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:09.256 [2024-11-20 11:07:58.025979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.256 [2024-11-20 11:07:58.026233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:09.256 [2024-11-20 11:07:58.026343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3746.757 ms 00:31:09.256 [2024-11-20 11:07:58.026384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.256 [2024-11-20 11:07:58.064376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.256 [2024-11-20 11:07:58.064569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:09.256 [2024-11-20 11:07:58.064688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.614 ms 00:31:09.256 [2024-11-20 11:07:58.064732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.256 [2024-11-20 11:07:58.064883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.256 [2024-11-20 11:07:58.064980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:09.256 [2024-11-20 11:07:58.065019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:09.256 [2024-11-20 11:07:58.065056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.256 [2024-11-20 11:07:58.110068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.256 [2024-11-20 11:07:58.110230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:09.256 [2024-11-20 11:07:58.110309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.943 ms 00:31:09.256 [2024-11-20 11:07:58.110352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:09.256 [2024-11-20 11:07:58.110406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.256 [2024-11-20 11:07:58.110444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:09.256 [2024-11-20 11:07:58.110474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:09.256 [2024-11-20 11:07:58.110514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.256 [2024-11-20 11:07:58.111040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.256 [2024-11-20 11:07:58.111093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:09.256 [2024-11-20 11:07:58.111191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:31:09.256 [2024-11-20 11:07:58.111230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.256 [2024-11-20 11:07:58.111358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.256 [2024-11-20 11:07:58.111443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:09.256 [2024-11-20 11:07:58.111482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:31:09.256 [2024-11-20 11:07:58.111651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.256 [2024-11-20 11:07:58.131736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.256 [2024-11-20 11:07:58.131895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:09.256 [2024-11-20 11:07:58.131996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.006 ms 00:31:09.256 [2024-11-20 11:07:58.132037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.256 [2024-11-20 11:07:58.144257] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:09.256 [2024-11-20 11:07:58.147622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.256 [2024-11-20 11:07:58.147741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:09.256 [2024-11-20 11:07:58.147867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.507 ms 00:31:09.256 [2024-11-20 11:07:58.147902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.256 [2024-11-20 11:07:58.265333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.256 [2024-11-20 11:07:58.265520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:09.256 [2024-11-20 11:07:58.265640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 117.567 ms 00:31:09.256 [2024-11-20 11:07:58.265680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.256 [2024-11-20 11:07:58.265895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.256 [2024-11-20 11:07:58.265995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:09.256 [2024-11-20 11:07:58.266039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:31:09.256 [2024-11-20 11:07:58.266070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.256 [2024-11-20 11:07:58.301257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.256 [2024-11-20 11:07:58.301421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
00:31:09.256 [2024-11-20 11:07:58.301502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.126 ms 00:31:09.256 [2024-11-20 11:07:58.301539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.256 [2024-11-20 11:07:58.335118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.257 [2024-11-20 11:07:58.335278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:09.257 [2024-11-20 11:07:58.335305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.530 ms 00:31:09.257 [2024-11-20 11:07:58.335316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.257 [2024-11-20 11:07:58.336031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.257 [2024-11-20 11:07:58.336050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:09.257 [2024-11-20 11:07:58.336064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:31:09.257 [2024-11-20 11:07:58.336075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.257 [2024-11-20 11:07:58.435749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.257 [2024-11-20 11:07:58.435796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:09.257 [2024-11-20 11:07:58.435816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.773 ms 00:31:09.257 [2024-11-20 11:07:58.435826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.257 [2024-11-20 11:07:58.471071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.257 [2024-11-20 11:07:58.471109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:09.257 [2024-11-20 11:07:58.471125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.222 ms 00:31:09.257 [2024-11-20 11:07:58.471150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.257 [2024-11-20 11:07:58.505603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.257 [2024-11-20 11:07:58.505765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:09.257 [2024-11-20 11:07:58.505800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.459 ms 00:31:09.257 [2024-11-20 11:07:58.505810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.515 [2024-11-20 11:07:58.540788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.515 [2024-11-20 11:07:58.540951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:09.515 [2024-11-20 11:07:58.540977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.964 ms 00:31:09.515 [2024-11-20 11:07:58.540987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.515 [2024-11-20 11:07:58.541066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.515 [2024-11-20 11:07:58.541079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:09.515 [2024-11-20 11:07:58.541096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:09.515 [2024-11-20 11:07:58.541106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.515 [2024-11-20 11:07:58.541206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.515 [2024-11-20 11:07:58.541218] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:09.516 [2024-11-20 11:07:58.541235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:09.516 [2024-11-20 11:07:58.541244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.516 [2024-11-20 11:07:58.542241] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4284.588 ms, result 0 00:31:09.516 { 00:31:09.516 "name": "ftl0", 00:31:09.516 "uuid": "1b367e70-c7da-47b1-b21d-8a8452023d94" 00:31:09.516 } 00:31:09.516 11:07:58 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:31:09.516 11:07:58 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:09.775 11:07:58 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:31:09.775 11:07:58 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:09.775 [2024-11-20 11:07:58.964960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.775 [2024-11-20 11:07:58.965017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:09.775 [2024-11-20 11:07:58.965034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:09.775 [2024-11-20 11:07:58.965056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.775 [2024-11-20 11:07:58.965083] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:09.775 [2024-11-20 11:07:58.969158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.775 [2024-11-20 11:07:58.969329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:09.775 [2024-11-20 11:07:58.969355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.060 ms 00:31:09.775 [2024-11-20 11:07:58.969366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.775 [2024-11-20 11:07:58.969641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.775 [2024-11-20 11:07:58.969656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:09.775 [2024-11-20 11:07:58.969673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:31:09.775 [2024-11-20 11:07:58.969683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.775 [2024-11-20 11:07:58.972198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.775 [2024-11-20 11:07:58.972222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:09.775 [2024-11-20 11:07:58.972236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.500 ms 00:31:09.775 [2024-11-20 11:07:58.972246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.775 [2024-11-20 11:07:58.977247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.775 [2024-11-20 11:07:58.977385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:09.775 [2024-11-20 11:07:58.977486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.983 ms 00:31:09.775 [2024-11-20 11:07:58.977521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:09.775 [2024-11-20 11:07:59.012946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:09.775 
[2024-11-20 11:07:59.013111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:09.775 [2024-11-20 11:07:59.013217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.373 ms 00:31:09.775 [2024-11-20 11:07:59.013233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.035 [2024-11-20 11:07:59.035658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.035 [2024-11-20 11:07:59.035694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:10.035 [2024-11-20 11:07:59.035711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.414 ms 00:31:10.035 [2024-11-20 11:07:59.035737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.035 [2024-11-20 11:07:59.035887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.035 [2024-11-20 11:07:59.035901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:10.035 [2024-11-20 11:07:59.035914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:31:10.035 [2024-11-20 11:07:59.035924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.035 [2024-11-20 11:07:59.070756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.035 [2024-11-20 11:07:59.070886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:10.035 [2024-11-20 11:07:59.070928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.866 ms 00:31:10.035 [2024-11-20 11:07:59.070938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.035 [2024-11-20 11:07:59.107319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.035 [2024-11-20 11:07:59.107356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:10.035 [2024-11-20 11:07:59.107372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.377 ms 00:31:10.035 [2024-11-20 11:07:59.107382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.035 [2024-11-20 11:07:59.141930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.035 [2024-11-20 11:07:59.141964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:10.035 [2024-11-20 11:07:59.141978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.557 ms 00:31:10.035 [2024-11-20 11:07:59.142003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.035 [2024-11-20 11:07:59.175468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.035 [2024-11-20 11:07:59.175504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:10.035 [2024-11-20 11:07:59.175519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.424 ms 00:31:10.035 [2024-11-20 11:07:59.175543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.035 [2024-11-20 11:07:59.175607] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:10.035 [2024-11-20 11:07:59.175623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175979] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.175989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.176002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.176012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.176026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.176037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.176049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.176060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.176072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.176099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.176112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.176122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.176137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.176148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.176160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:10.035 [2024-11-20 11:07:59.176171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 
11:07:59.176289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:31:10.036 [2024-11-20 11:07:59.176600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:10.036 [2024-11-20 11:07:59.176891] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:10.036 [2024-11-20 11:07:59.176906] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b367e70-c7da-47b1-b21d-8a8452023d94 00:31:10.036 
[2024-11-20 11:07:59.176917] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:10.036 [2024-11-20 11:07:59.176932] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:10.036 [2024-11-20 11:07:59.176941] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:10.036 [2024-11-20 11:07:59.176957] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:10.036 [2024-11-20 11:07:59.176966] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:10.036 [2024-11-20 11:07:59.176979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:10.036 [2024-11-20 11:07:59.176989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:10.036 [2024-11-20 11:07:59.177000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:10.036 [2024-11-20 11:07:59.177009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:10.036 [2024-11-20 11:07:59.177020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.036 [2024-11-20 11:07:59.177030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:10.036 [2024-11-20 11:07:59.177044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.431 ms 00:31:10.036 [2024-11-20 11:07:59.177053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.036 [2024-11-20 11:07:59.196177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.036 [2024-11-20 11:07:59.196209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:10.037 [2024-11-20 11:07:59.196224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.096 ms 00:31:10.037 [2024-11-20 11:07:59.196234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.037 [2024-11-20 11:07:59.196734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:10.037 [2024-11-20 11:07:59.196748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:10.037 [2024-11-20 11:07:59.196761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:31:10.037 [2024-11-20 11:07:59.196774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.037 [2024-11-20 11:07:59.257983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.037 [2024-11-20 11:07:59.258136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:10.037 [2024-11-20 11:07:59.258161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.037 [2024-11-20 11:07:59.258172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.037 [2024-11-20 11:07:59.258230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.037 [2024-11-20 11:07:59.258241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:10.037 [2024-11-20 11:07:59.258254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.037 [2024-11-20 11:07:59.258266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.037 [2024-11-20 11:07:59.258363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.037 [2024-11-20 11:07:59.258376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:10.037 [2024-11-20 11:07:59.258389] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.037 [2024-11-20 11:07:59.258399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.037 [2024-11-20 11:07:59.258423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.037 [2024-11-20 11:07:59.258434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:10.037 [2024-11-20 11:07:59.258446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.037 [2024-11-20 11:07:59.258456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.296 [2024-11-20 11:07:59.374053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.296 [2024-11-20 11:07:59.374099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:10.296 [2024-11-20 11:07:59.374117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.296 [2024-11-20 11:07:59.374127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.296 [2024-11-20 11:07:59.468696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.296 [2024-11-20 11:07:59.468894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:10.296 [2024-11-20 11:07:59.468922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.296 [2024-11-20 11:07:59.468936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.296 [2024-11-20 11:07:59.469041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.296 [2024-11-20 11:07:59.469053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:10.296 [2024-11-20 11:07:59.469067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.296 [2024-11-20 11:07:59.469077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.296 [2024-11-20 11:07:59.469132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.296 [2024-11-20 11:07:59.469143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:10.296 [2024-11-20 11:07:59.469156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.296 [2024-11-20 11:07:59.469166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.296 [2024-11-20 11:07:59.469298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.296 [2024-11-20 11:07:59.469312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:10.296 [2024-11-20 11:07:59.469325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.296 [2024-11-20 11:07:59.469334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.296 [2024-11-20 11:07:59.469378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.296 [2024-11-20 11:07:59.469391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:10.296 [2024-11-20 11:07:59.469403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.296 [2024-11-20 11:07:59.469413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.296 [2024-11-20 11:07:59.469454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.296 [2024-11-20 11:07:59.469468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:31:10.296 [2024-11-20 11:07:59.469480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.296 [2024-11-20 11:07:59.469490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.296 [2024-11-20 11:07:59.469538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:10.296 [2024-11-20 11:07:59.469549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:10.296 [2024-11-20 11:07:59.469562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:10.296 [2024-11-20 11:07:59.469572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:10.296 [2024-11-20 11:07:59.469747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 505.569 ms, result 0 00:31:10.296 true 00:31:10.296 11:07:59 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 84251 00:31:10.296 11:07:59 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 84251 ']' 00:31:10.296 11:07:59 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 84251 00:31:10.296 11:07:59 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:31:10.296 11:07:59 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:10.296 11:07:59 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84251 00:31:10.296 killing process with pid 84251 00:31:10.296 11:07:59 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:10.296 11:07:59 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:10.296 11:07:59 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84251' 00:31:10.296 11:07:59 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 84251 00:31:10.296 11:07:59 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 84251 00:31:15.658 11:08:04 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:31:18.948 262144+0 records in 00:31:18.948 262144+0 records out 00:31:18.948 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.80261 s, 282 MB/s 00:31:18.948 11:08:08 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:20.850 11:08:09 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:20.850 [2024-11-20 11:08:09.744856] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
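The sequence above is the turnaround point of the restore test: the first SPDK app is killed, a 1 GiB random test file is generated and checksummed, and spdk_dd is launched with --json pointing at the config that restore.sh lines 61-63 assembled earlier by wrapping the save_subsystem_config output in {"subsystems": [...]}. The md5sum is presumably recorded so the data can be compared again once the FTL device has been restored later in the test. The dd figures are self-consistent; a minimal sketch of the same arithmetic, assuming GNU dd's SI convention of 1 MB = 10^6 bytes:

  $ # values taken from the dd output above: 262144 blocks x 4096 B in 3.80261 s
  $ awk 'BEGIN { printf "%.0f MB/s\n", 262144 * 4096 / 3.80261 / 1e6 }'
  282 MB/s

262144 blocks of 4 KiB is exactly the 1073741824 bytes dd reports, and 282 MB/s matches its rate.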
00:31:20.850 [2024-11-20 11:08:09.745165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84487 ] 00:31:20.850 [2024-11-20 11:08:09.919657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.850 [2024-11-20 11:08:10.023693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.421 [2024-11-20 11:08:10.364995] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:21.421 [2024-11-20 11:08:10.365056] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:21.421 [2024-11-20 11:08:10.523988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.421 [2024-11-20 11:08:10.524036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:21.421 [2024-11-20 11:08:10.524057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:21.421 [2024-11-20 11:08:10.524067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.421 [2024-11-20 11:08:10.524113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.421 [2024-11-20 11:08:10.524125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:21.421 [2024-11-20 11:08:10.524138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:21.421 [2024-11-20 11:08:10.524147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.421 [2024-11-20 11:08:10.524168] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:21.421 [2024-11-20 11:08:10.525185] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:21.421 [2024-11-20 11:08:10.525207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.421 [2024-11-20 11:08:10.525218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:21.421 [2024-11-20 11:08:10.525228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:31:21.421 [2024-11-20 11:08:10.525238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.421 [2024-11-20 11:08:10.526618] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:21.421 [2024-11-20 11:08:10.546077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.421 [2024-11-20 11:08:10.546255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:21.421 [2024-11-20 11:08:10.546278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.491 ms 00:31:21.421 [2024-11-20 11:08:10.546289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.421 [2024-11-20 11:08:10.546355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.421 [2024-11-20 11:08:10.546368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:21.421 [2024-11-20 11:08:10.546380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:31:21.421 [2024-11-20 11:08:10.546391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.421 [2024-11-20 11:08:10.553084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:21.421 [2024-11-20 11:08:10.553233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:21.421 [2024-11-20 11:08:10.553254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.630 ms 00:31:21.421 [2024-11-20 11:08:10.553265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.421 [2024-11-20 11:08:10.553356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.421 [2024-11-20 11:08:10.553371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:21.421 [2024-11-20 11:08:10.553382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:31:21.421 [2024-11-20 11:08:10.553392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.421 [2024-11-20 11:08:10.553435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.421 [2024-11-20 11:08:10.553448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:21.421 [2024-11-20 11:08:10.553459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:21.421 [2024-11-20 11:08:10.553470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.421 [2024-11-20 11:08:10.553495] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:21.421 [2024-11-20 11:08:10.558297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.422 [2024-11-20 11:08:10.558344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:21.422 [2024-11-20 11:08:10.558356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.817 ms 00:31:21.422 [2024-11-20 11:08:10.558370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.422 [2024-11-20 11:08:10.558400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.422 [2024-11-20 11:08:10.558411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:21.422 [2024-11-20 11:08:10.558422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:21.422 [2024-11-20 11:08:10.558431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.422 [2024-11-20 11:08:10.558483] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:21.422 [2024-11-20 11:08:10.558512] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:21.422 [2024-11-20 11:08:10.558562] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:21.422 [2024-11-20 11:08:10.558582] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:21.422 [2024-11-20 11:08:10.558684] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:21.422 [2024-11-20 11:08:10.558699] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:21.422 [2024-11-20 11:08:10.558712] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:21.422 [2024-11-20 11:08:10.558725] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:21.422 [2024-11-20 11:08:10.558737] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:21.422 [2024-11-20 11:08:10.558749] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:21.422 [2024-11-20 11:08:10.558758] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:21.422 [2024-11-20 11:08:10.558768] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:21.422 [2024-11-20 11:08:10.558778] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:21.422 [2024-11-20 11:08:10.558793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.422 [2024-11-20 11:08:10.558803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:21.422 [2024-11-20 11:08:10.558814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:31:21.422 [2024-11-20 11:08:10.558823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.422 [2024-11-20 11:08:10.558895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.422 [2024-11-20 11:08:10.558906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:21.422 [2024-11-20 11:08:10.558916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:21.422 [2024-11-20 11:08:10.558926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.422 [2024-11-20 11:08:10.559019] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:21.422 [2024-11-20 11:08:10.559037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:21.422 [2024-11-20 11:08:10.559048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:21.422 [2024-11-20 11:08:10.559058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.422 [2024-11-20 11:08:10.559069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:21.422 [2024-11-20 11:08:10.559078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:21.422 [2024-11-20 11:08:10.559088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:21.422 [2024-11-20 11:08:10.559097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:21.422 [2024-11-20 11:08:10.559106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:21.422 [2024-11-20 11:08:10.559116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:21.422 [2024-11-20 11:08:10.559125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:21.422 [2024-11-20 11:08:10.559134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:21.422 [2024-11-20 11:08:10.559143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:21.422 [2024-11-20 11:08:10.559152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:21.422 [2024-11-20 11:08:10.559162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:21.422 [2024-11-20 11:08:10.559179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.422 [2024-11-20 11:08:10.559189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:21.422 [2024-11-20 11:08:10.559199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:21.422 [2024-11-20 11:08:10.559208] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.422 [2024-11-20 11:08:10.559217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:21.422 [2024-11-20 11:08:10.559227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:21.422 [2024-11-20 11:08:10.559236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:21.422 [2024-11-20 11:08:10.559245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:21.422 [2024-11-20 11:08:10.559254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:21.422 [2024-11-20 11:08:10.559263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:21.422 [2024-11-20 11:08:10.559272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:21.422 [2024-11-20 11:08:10.559281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:21.422 [2024-11-20 11:08:10.559290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:21.422 [2024-11-20 11:08:10.559299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:21.422 [2024-11-20 11:08:10.559308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:21.422 [2024-11-20 11:08:10.559317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:21.422 [2024-11-20 11:08:10.559326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:21.422 [2024-11-20 11:08:10.559335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:21.422 [2024-11-20 11:08:10.559345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:21.422 [2024-11-20 11:08:10.559354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:21.422 [2024-11-20 11:08:10.559363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:21.422 [2024-11-20 11:08:10.559372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:21.422 [2024-11-20 11:08:10.559381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:21.422 [2024-11-20 11:08:10.559390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:21.422 [2024-11-20 11:08:10.559398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.422 [2024-11-20 11:08:10.559407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:21.422 [2024-11-20 11:08:10.559416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:21.422 [2024-11-20 11:08:10.559426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.422 [2024-11-20 11:08:10.559436] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:21.422 [2024-11-20 11:08:10.559445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:21.422 [2024-11-20 11:08:10.559454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:21.422 [2024-11-20 11:08:10.559464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:21.422 [2024-11-20 11:08:10.559474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:21.422 [2024-11-20 11:08:10.559483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:21.422 [2024-11-20 11:08:10.559493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:21.422 
[2024-11-20 11:08:10.559502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:21.422 [2024-11-20 11:08:10.559511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:21.422 [2024-11-20 11:08:10.559520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:21.422 [2024-11-20 11:08:10.559531] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:21.422 [2024-11-20 11:08:10.559543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:21.422 [2024-11-20 11:08:10.559554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:21.422 [2024-11-20 11:08:10.559564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:21.422 [2024-11-20 11:08:10.559574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:21.422 [2024-11-20 11:08:10.559584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:21.422 [2024-11-20 11:08:10.559606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:21.422 [2024-11-20 11:08:10.559617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:21.422 [2024-11-20 11:08:10.559627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:21.422 [2024-11-20 11:08:10.559637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:21.422 [2024-11-20 11:08:10.559648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:21.422 [2024-11-20 11:08:10.559658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:21.422 [2024-11-20 11:08:10.559668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:21.422 [2024-11-20 11:08:10.559678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:21.422 [2024-11-20 11:08:10.559688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:21.422 [2024-11-20 11:08:10.559698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:21.422 [2024-11-20 11:08:10.559708] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:21.423 [2024-11-20 11:08:10.559722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:21.423 [2024-11-20 11:08:10.559733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:21.423 [2024-11-20 11:08:10.559743] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:21.423 [2024-11-20 11:08:10.559753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:21.423 [2024-11-20 11:08:10.559765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:21.423 [2024-11-20 11:08:10.559775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.423 [2024-11-20 11:08:10.559785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:21.423 [2024-11-20 11:08:10.559796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:31:21.423 [2024-11-20 11:08:10.559806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.423 [2024-11-20 11:08:10.598235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.423 [2024-11-20 11:08:10.598273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:21.423 [2024-11-20 11:08:10.598288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.446 ms 00:31:21.423 [2024-11-20 11:08:10.598299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.423 [2024-11-20 11:08:10.598383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.423 [2024-11-20 11:08:10.598395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:21.423 [2024-11-20 11:08:10.598406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:21.423 [2024-11-20 11:08:10.598416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.423 [2024-11-20 11:08:10.649415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.423 [2024-11-20 11:08:10.649452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:21.423 [2024-11-20 11:08:10.649466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.028 ms 00:31:21.423 [2024-11-20 11:08:10.649491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.423 [2024-11-20 11:08:10.649527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.423 [2024-11-20 11:08:10.649538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:21.423 [2024-11-20 11:08:10.649549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:21.423 [2024-11-20 11:08:10.649562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.423 [2024-11-20 11:08:10.650190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.423 [2024-11-20 11:08:10.650298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:21.423 [2024-11-20 11:08:10.650318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:31:21.423 [2024-11-20 11:08:10.650328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.423 [2024-11-20 11:08:10.650451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.423 [2024-11-20 11:08:10.650464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:21.423 [2024-11-20 11:08:10.650475] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:31:21.423 [2024-11-20 11:08:10.650491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.423 [2024-11-20 11:08:10.669575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.423 [2024-11-20 11:08:10.669754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:21.423 [2024-11-20 11:08:10.669782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.086 ms 00:31:21.423 [2024-11-20 11:08:10.669793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.683 [2024-11-20 11:08:10.688452] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:21.683 [2024-11-20 11:08:10.688492] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:21.683 [2024-11-20 11:08:10.688508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.683 [2024-11-20 11:08:10.688518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:21.683 [2024-11-20 11:08:10.688530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.643 ms 00:31:21.683 [2024-11-20 11:08:10.688540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.683 [2024-11-20 11:08:10.717557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.683 [2024-11-20 11:08:10.717605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:21.683 [2024-11-20 11:08:10.717640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.022 ms 00:31:21.683 [2024-11-20 11:08:10.717651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.683 [2024-11-20 11:08:10.736162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.683 [2024-11-20 11:08:10.736208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:21.683 [2024-11-20 11:08:10.736221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.500 ms 00:31:21.683 [2024-11-20 11:08:10.736231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.683 [2024-11-20 11:08:10.753775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.683 [2024-11-20 11:08:10.753811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:21.683 [2024-11-20 11:08:10.753834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.536 ms 00:31:21.683 [2024-11-20 11:08:10.753843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.683 [2024-11-20 11:08:10.754572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.683 [2024-11-20 11:08:10.754590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:21.683 [2024-11-20 11:08:10.754618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:31:21.683 [2024-11-20 11:08:10.754643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.683 [2024-11-20 11:08:10.837501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.683 [2024-11-20 11:08:10.837560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:21.683 [2024-11-20 11:08:10.837576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.967 ms 00:31:21.683 [2024-11-20 11:08:10.837604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.683 [2024-11-20 11:08:10.847754] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:21.683 [2024-11-20 11:08:10.850007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.683 [2024-11-20 11:08:10.850034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:21.683 [2024-11-20 11:08:10.850047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.363 ms 00:31:21.683 [2024-11-20 11:08:10.850057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.683 [2024-11-20 11:08:10.850128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.683 [2024-11-20 11:08:10.850140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:21.683 [2024-11-20 11:08:10.850150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:21.683 [2024-11-20 11:08:10.850160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.683 [2024-11-20 11:08:10.850226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.683 [2024-11-20 11:08:10.850238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:21.683 [2024-11-20 11:08:10.850248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:21.683 [2024-11-20 11:08:10.850257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.683 [2024-11-20 11:08:10.850276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.683 [2024-11-20 11:08:10.850287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:21.683 [2024-11-20 11:08:10.850297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:21.683 [2024-11-20 11:08:10.850306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.683 [2024-11-20 11:08:10.850339] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:21.683 [2024-11-20 11:08:10.850350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.683 [2024-11-20 11:08:10.850362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:21.683 [2024-11-20 11:08:10.850372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:21.683 [2024-11-20 11:08:10.850381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.683 [2024-11-20 11:08:10.884477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.683 [2024-11-20 11:08:10.884515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:21.683 [2024-11-20 11:08:10.884528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.133 ms 00:31:21.683 [2024-11-20 11:08:10.884554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:21.683 [2024-11-20 11:08:10.884652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:21.683 [2024-11-20 11:08:10.884665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:21.683 [2024-11-20 11:08:10.884676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:21.683 [2024-11-20 11:08:10.884685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:21.683 [2024-11-20 11:08:10.885747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 361.900 ms, result 0
00:31:23.062 [2024-11-20T11:08:13.252Z .. 2024-11-20T11:08:53.126Z] Copying: 1024/1024 [MB] (average 24 MBps)
[2024-11-20 11:08:52.956452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:03.873 [2024-11-20 11:08:52.956498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:32:03.873 [2024-11-20 11:08:52.956513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:32:03.874 [2024-11-20 11:08:52.956524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:03.874 [2024-11-20 11:08:52.956545] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:32:03.874 [2024-11-20 11:08:52.960623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:03.874 [2024-11-20 11:08:52.960654] mngt/ftl_mngt.c: 428:trace_step:
*NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:03.874 [2024-11-20 11:08:52.960665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.068 ms 00:32:03.874 [2024-11-20 11:08:52.960676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.874 [2024-11-20 11:08:52.962446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.874 [2024-11-20 11:08:52.962483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:03.874 [2024-11-20 11:08:52.962495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.743 ms 00:32:03.874 [2024-11-20 11:08:52.962514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.874 [2024-11-20 11:08:52.962546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.874 [2024-11-20 11:08:52.962558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:03.874 [2024-11-20 11:08:52.962568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:03.874 [2024-11-20 11:08:52.962578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.874 [2024-11-20 11:08:52.962635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.874 [2024-11-20 11:08:52.962650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:03.874 [2024-11-20 11:08:52.962660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:03.874 [2024-11-20 11:08:52.962670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.874 [2024-11-20 11:08:52.962685] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:03.874 [2024-11-20 11:08:52.962698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.962993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963100] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 
11:08:52.963358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:03.874 [2024-11-20 11:08:52.963451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:03.875 [2024-11-20 11:08:52.963461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:03.875 [2024-11-20 11:08:52.963471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:03.875 [2024-11-20 11:08:52.963482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:03.875 [2024-11-20 11:08:52.963492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:03.875 [2024-11-20 11:08:52.963502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:03.875 [2024-11-20 11:08:52.963512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:03.875 [2024-11-20 11:08:52.963523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:03.875 [2024-11-20 11:08:52.963533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:03.875 [2024-11-20 11:08:52.963547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:03.875 [2024-11-20 11:08:52.963557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:03.875 [2024-11-20 11:08:52.963568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:03.875 [2024-11-20 11:08:52.963578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:03.875 [2024-11-20 11:08:52.963588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:03.875 [2024-11-20 11:08:52.963987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:03.875 [2024-11-20 11:08:52.964042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:32:03.875 [2024-11-20 11:08:52.964091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:32:03.875 [2024-11-20 11:08:52.964139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:32:03.875 [2024-11-20 11:08:52.964366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:32:03.875 [2024-11-20 11:08:52.964418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:32:03.875 [2024-11-20 11:08:52.964465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:32:03.875 [2024-11-20 11:08:52.964512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:32:03.875 [2024-11-20 11:08:52.964559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:32:03.875 [2024-11-20 11:08:52.964667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:32:03.875 [2024-11-20 11:08:52.964717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:32:03.875 [2024-11-20 11:08:52.964766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:32:03.875 [2024-11-20 11:08:52.964854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:32:03.875 [2024-11-20 11:08:52.964904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:32:03.875 [2024-11-20 11:08:52.964951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:32:03.875 [2024-11-20 11:08:52.965037] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:32:03.875 [2024-11-20 11:08:52.965131] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b367e70-c7da-47b1-b21d-8a8452023d94
00:32:03.875 [2024-11-20 11:08:52.965147] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:32:03.875 [2024-11-20 11:08:52.965157] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32
00:32:03.875 [2024-11-20 11:08:52.965167] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:32:03.875 [2024-11-20 11:08:52.965177] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:32:03.875 [2024-11-20 11:08:52.965192] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:32:03.875 [2024-11-20 11:08:52.965202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:32:03.875 [2024-11-20 11:08:52.965211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:32:03.875 [2024-11-20 11:08:52.965220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:32:03.875 [2024-11-20 11:08:52.965229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:32:03.875 [2024-11-20 11:08:52.965240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:32:03.875 [2024-11-20 11:08:52.965253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:32:03.875 [2024-11-20 11:08:52.965265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.559 ms
00:32:03.875 [2024-11-20 11:08:52.965274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 00:32:03.875 [2024-11-20 11:08:52.984364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.875 [2024-11-20 11:08:52.984523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:03.875 [2024-11-20 11:08:52.984549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.097 ms 00:32:03.875 [2024-11-20 11:08:52.984559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.875 [2024-11-20 11:08:52.985128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.875 [2024-11-20 11:08:52.985143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:03.875 [2024-11-20 11:08:52.985154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:32:03.875 [2024-11-20 11:08:52.985163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.875 [2024-11-20 11:08:53.035328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:03.875 [2024-11-20 11:08:53.035368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:03.875 [2024-11-20 11:08:53.035381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:03.875 [2024-11-20 11:08:53.035391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.875 [2024-11-20 11:08:53.035441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:03.875 [2024-11-20 11:08:53.035452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:03.875 [2024-11-20 11:08:53.035461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:03.875 [2024-11-20 11:08:53.035471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.875 [2024-11-20 11:08:53.035539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:03.875 [2024-11-20 11:08:53.035552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:03.875 [2024-11-20 11:08:53.035567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:03.875 [2024-11-20 11:08:53.035576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.875 [2024-11-20 11:08:53.035603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:03.875 [2024-11-20 11:08:53.035614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:03.875 [2024-11-20 11:08:53.035624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:03.875 [2024-11-20 11:08:53.035638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.136 [2024-11-20 11:08:53.155386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.136 [2024-11-20 11:08:53.155582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:04.136 [2024-11-20 11:08:53.155696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.136 [2024-11-20 11:08:53.155735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.136 [2024-11-20 11:08:53.250970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.136 [2024-11-20 11:08:53.251147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:04.136 [2024-11-20 11:08:53.251279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.136 
[2024-11-20 11:08:53.251316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.136 [2024-11-20 11:08:53.251424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.136 [2024-11-20 11:08:53.251459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:04.136 [2024-11-20 11:08:53.251540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.136 [2024-11-20 11:08:53.251582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.136 [2024-11-20 11:08:53.251677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.136 [2024-11-20 11:08:53.251712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:04.136 [2024-11-20 11:08:53.251742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.136 [2024-11-20 11:08:53.251866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.136 [2024-11-20 11:08:53.251969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.136 [2024-11-20 11:08:53.252006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:04.136 [2024-11-20 11:08:53.252083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.136 [2024-11-20 11:08:53.252165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.136 [2024-11-20 11:08:53.252245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.136 [2024-11-20 11:08:53.252318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:04.136 [2024-11-20 11:08:53.252355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.136 [2024-11-20 11:08:53.252424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.136 [2024-11-20 11:08:53.252489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.136 [2024-11-20 11:08:53.252522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:04.136 [2024-11-20 11:08:53.252584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.136 [2024-11-20 11:08:53.252641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.136 [2024-11-20 11:08:53.252697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:04.136 [2024-11-20 11:08:53.252709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:04.136 [2024-11-20 11:08:53.252720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:04.136 [2024-11-20 11:08:53.252730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.136 [2024-11-20 11:08:53.252844] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 296.839 ms, result 0 00:32:05.514 00:32:05.514 00:32:05.514 11:08:54 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:05.774 [2024-11-20 11:08:54.771400] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
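For context, the spdk_dd invocation echoed above is the read-back half of the restore test: --ib=ftl0 names the FTL bdev as input, --of the plain output file, and --count=262144 the number of logical blocks to copy. Assuming the 4 KiB logical block size implied by the "Copying: .../1024 [MB]" progress that follows, 262144 blocks is exactly the 1024 MB total the copy loop reports. A minimal sketch of that arithmetic, and of reading the progress entries, assuming only this log's line format:

    # Sketch only, not part of the test suite. Assumes spdk_dd's --count is in
    # logical blocks and that ftl0 exposes 4 KiB blocks, which is consistent
    # with the "Copying: .../1024 [MB]" progress lines in this log.
    import re

    BLOCK_SIZE = 4096                   # bytes per logical block (assumed)
    COUNT = 262144                      # from --count=262144 above
    print(COUNT * BLOCK_SIZE // 2**20)  # -> 1024, matching "Copying: 1024/1024 [MB]"

    # Interval entries look like "[2024-11-20T11:08:14.189Z] Copying: 46/1024 [MB] (24 MBps)"
    pat = re.compile(r"Copying: (\d+)/(\d+) \[MB\] \((\d+) MBps\)")
    m = pat.search("[2024-11-20T11:08:14.189Z] Copying: 46/1024 [MB] (24 MBps)")
    if m:
        done, total, rate = map(int, m.groups())
        print(f"{done} of {total} MB copied at {rate} MBps")

The same format shows up again after the second FTL startup below, where the read-back completes at an average of 26 MBps.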
00:32:05.774 [2024-11-20 11:08:54.771516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84947 ] 00:32:05.774 [2024-11-20 11:08:54.951296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.033 [2024-11-20 11:08:55.062158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.292 [2024-11-20 11:08:55.396786] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:06.292 [2024-11-20 11:08:55.397042] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:06.552 [2024-11-20 11:08:55.556750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.552 [2024-11-20 11:08:55.556951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:06.552 [2024-11-20 11:08:55.556984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:06.552 [2024-11-20 11:08:55.556996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.552 [2024-11-20 11:08:55.557052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.552 [2024-11-20 11:08:55.557065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:06.552 [2024-11-20 11:08:55.557080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:06.552 [2024-11-20 11:08:55.557090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.552 [2024-11-20 11:08:55.557113] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:06.552 [2024-11-20 11:08:55.558142] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:06.552 [2024-11-20 11:08:55.558172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.552 [2024-11-20 11:08:55.558183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:06.552 [2024-11-20 11:08:55.558194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms 00:32:06.552 [2024-11-20 11:08:55.558205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.552 [2024-11-20 11:08:55.558553] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:06.552 [2024-11-20 11:08:55.558580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.552 [2024-11-20 11:08:55.558591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:06.552 [2024-11-20 11:08:55.558622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:06.552 [2024-11-20 11:08:55.558632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.552 [2024-11-20 11:08:55.558686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.552 [2024-11-20 11:08:55.558699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:06.552 [2024-11-20 11:08:55.558709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:06.552 [2024-11-20 11:08:55.558720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.552 [2024-11-20 11:08:55.559151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:06.552 [2024-11-20 11:08:55.559259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:06.552 [2024-11-20 11:08:55.559274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:32:06.552 [2024-11-20 11:08:55.559284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.552 [2024-11-20 11:08:55.559361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.552 [2024-11-20 11:08:55.559374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:06.552 [2024-11-20 11:08:55.559385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:32:06.552 [2024-11-20 11:08:55.559395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.552 [2024-11-20 11:08:55.559419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.552 [2024-11-20 11:08:55.559430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:06.552 [2024-11-20 11:08:55.559441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:06.552 [2024-11-20 11:08:55.559455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.552 [2024-11-20 11:08:55.559477] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:06.552 [2024-11-20 11:08:55.564586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.552 [2024-11-20 11:08:55.564630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:06.552 [2024-11-20 11:08:55.564643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.121 ms 00:32:06.553 [2024-11-20 11:08:55.564669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.553 [2024-11-20 11:08:55.564698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.553 [2024-11-20 11:08:55.564710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:06.553 [2024-11-20 11:08:55.564720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:06.553 [2024-11-20 11:08:55.564730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.553 [2024-11-20 11:08:55.564792] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:06.553 [2024-11-20 11:08:55.564818] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:06.553 [2024-11-20 11:08:55.564856] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:06.553 [2024-11-20 11:08:55.564873] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:06.553 [2024-11-20 11:08:55.564966] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:06.553 [2024-11-20 11:08:55.564980] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:06.553 [2024-11-20 11:08:55.564993] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:06.553 [2024-11-20 11:08:55.565007] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:06.553 [2024-11-20 11:08:55.565019] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:06.553 [2024-11-20 11:08:55.565030] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:06.553 [2024-11-20 11:08:55.565044] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:06.553 [2024-11-20 11:08:55.565055] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:06.553 [2024-11-20 11:08:55.565065] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:06.553 [2024-11-20 11:08:55.565075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.553 [2024-11-20 11:08:55.565085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:06.553 [2024-11-20 11:08:55.565096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:32:06.553 [2024-11-20 11:08:55.565106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.553 [2024-11-20 11:08:55.565180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.553 [2024-11-20 11:08:55.565191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:06.553 [2024-11-20 11:08:55.565201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:32:06.553 [2024-11-20 11:08:55.565215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.553 [2024-11-20 11:08:55.565309] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:06.553 [2024-11-20 11:08:55.565323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:06.553 [2024-11-20 11:08:55.565336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:06.553 [2024-11-20 11:08:55.565347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:06.553 [2024-11-20 11:08:55.565357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:06.553 [2024-11-20 11:08:55.565367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:06.553 [2024-11-20 11:08:55.565376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:06.553 [2024-11-20 11:08:55.565386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:06.553 [2024-11-20 11:08:55.565396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:06.553 [2024-11-20 11:08:55.565406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:06.553 [2024-11-20 11:08:55.565416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:06.553 [2024-11-20 11:08:55.565427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:06.553 [2024-11-20 11:08:55.565437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:06.553 [2024-11-20 11:08:55.565447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:06.553 [2024-11-20 11:08:55.565456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:06.553 [2024-11-20 11:08:55.565466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:06.553 [2024-11-20 11:08:55.565475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:06.553 [2024-11-20 11:08:55.565494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:06.553 [2024-11-20 11:08:55.565504] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:06.553 [2024-11-20 11:08:55.565514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:06.553 [2024-11-20 11:08:55.565524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:06.553 [2024-11-20 11:08:55.565533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:06.553 [2024-11-20 11:08:55.565543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:06.553 [2024-11-20 11:08:55.565552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:06.553 [2024-11-20 11:08:55.565561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:06.553 [2024-11-20 11:08:55.565571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:06.553 [2024-11-20 11:08:55.565580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:06.553 [2024-11-20 11:08:55.565590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:06.553 [2024-11-20 11:08:55.565611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:06.553 [2024-11-20 11:08:55.565621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:06.553 [2024-11-20 11:08:55.565630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:06.553 [2024-11-20 11:08:55.565640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:06.553 [2024-11-20 11:08:55.565650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:06.553 [2024-11-20 11:08:55.565659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:06.553 [2024-11-20 11:08:55.565668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:06.553 [2024-11-20 11:08:55.565678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:06.553 [2024-11-20 11:08:55.565687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:06.553 [2024-11-20 11:08:55.565697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:06.553 [2024-11-20 11:08:55.565708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:06.553 [2024-11-20 11:08:55.565717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:06.553 [2024-11-20 11:08:55.565727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:06.553 [2024-11-20 11:08:55.565736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:06.553 [2024-11-20 11:08:55.565746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:06.553 [2024-11-20 11:08:55.565755] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:06.553 [2024-11-20 11:08:55.565770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:06.553 [2024-11-20 11:08:55.565780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:06.553 [2024-11-20 11:08:55.565790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:06.553 [2024-11-20 11:08:55.565800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:06.553 [2024-11-20 11:08:55.565810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:06.553 [2024-11-20 11:08:55.565820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:06.553 
[2024-11-20 11:08:55.565829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:06.553 [2024-11-20 11:08:55.565839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:06.553 [2024-11-20 11:08:55.565848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:06.553 [2024-11-20 11:08:55.565859] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:06.553 [2024-11-20 11:08:55.565875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:06.553 [2024-11-20 11:08:55.565886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:06.553 [2024-11-20 11:08:55.565897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:06.553 [2024-11-20 11:08:55.565908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:06.553 [2024-11-20 11:08:55.565919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:06.554 [2024-11-20 11:08:55.565929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:06.554 [2024-11-20 11:08:55.565939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:06.554 [2024-11-20 11:08:55.565950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:06.554 [2024-11-20 11:08:55.565960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:06.554 [2024-11-20 11:08:55.565971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:06.554 [2024-11-20 11:08:55.565981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:06.554 [2024-11-20 11:08:55.565992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:06.554 [2024-11-20 11:08:55.566003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:06.554 [2024-11-20 11:08:55.566013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:06.554 [2024-11-20 11:08:55.566024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:06.554 [2024-11-20 11:08:55.566034] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:06.554 [2024-11-20 11:08:55.566046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:06.554 [2024-11-20 11:08:55.566057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:06.554 [2024-11-20 11:08:55.566069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:06.554 [2024-11-20 11:08:55.566079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:06.554 [2024-11-20 11:08:55.566090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:06.554 [2024-11-20 11:08:55.566101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.566112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:06.554 [2024-11-20 11:08:55.566122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.851 ms 00:32:06.554 [2024-11-20 11:08:55.566133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.554 [2024-11-20 11:08:55.602704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.602745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:06.554 [2024-11-20 11:08:55.602774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.589 ms 00:32:06.554 [2024-11-20 11:08:55.602785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.554 [2024-11-20 11:08:55.602864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.602876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:06.554 [2024-11-20 11:08:55.602886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:06.554 [2024-11-20 11:08:55.602900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.554 [2024-11-20 11:08:55.671194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.671233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:06.554 [2024-11-20 11:08:55.671262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.347 ms 00:32:06.554 [2024-11-20 11:08:55.671273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.554 [2024-11-20 11:08:55.671310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.671321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:06.554 [2024-11-20 11:08:55.671332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:06.554 [2024-11-20 11:08:55.671342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.554 [2024-11-20 11:08:55.671463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.671475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:06.554 [2024-11-20 11:08:55.671486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:32:06.554 [2024-11-20 11:08:55.671496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.554 [2024-11-20 11:08:55.671636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.671652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:06.554 [2024-11-20 11:08:55.671663] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:32:06.554 [2024-11-20 11:08:55.671672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.554 [2024-11-20 11:08:55.690457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.690492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:06.554 [2024-11-20 11:08:55.690528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.795 ms 00:32:06.554 [2024-11-20 11:08:55.690539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.554 [2024-11-20 11:08:55.690665] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:06.554 [2024-11-20 11:08:55.690681] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:06.554 [2024-11-20 11:08:55.690692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.690706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:06.554 [2024-11-20 11:08:55.690716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:32:06.554 [2024-11-20 11:08:55.690726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.554 [2024-11-20 11:08:55.701605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.701639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:06.554 [2024-11-20 11:08:55.701652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.879 ms 00:32:06.554 [2024-11-20 11:08:55.701662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.554 [2024-11-20 11:08:55.701765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.701776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:06.554 [2024-11-20 11:08:55.701786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:32:06.554 [2024-11-20 11:08:55.701816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.554 [2024-11-20 11:08:55.701864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.701876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:06.554 [2024-11-20 11:08:55.701886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:06.554 [2024-11-20 11:08:55.701895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.554 [2024-11-20 11:08:55.702569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.702585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:06.554 [2024-11-20 11:08:55.702608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:32:06.554 [2024-11-20 11:08:55.702619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.554 [2024-11-20 11:08:55.702638] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:06.554 [2024-11-20 11:08:55.702655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.702666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:06.554 [2024-11-20 11:08:55.702676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:06.554 [2024-11-20 11:08:55.702685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.554 [2024-11-20 11:08:55.714929] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:06.554 [2024-11-20 11:08:55.715116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.715129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:06.554 [2024-11-20 11:08:55.715141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.430 ms 00:32:06.554 [2024-11-20 11:08:55.715151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.554 [2024-11-20 11:08:55.716938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.716971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:06.554 [2024-11-20 11:08:55.716983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.769 ms 00:32:06.554 [2024-11-20 11:08:55.716992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.554 [2024-11-20 11:08:55.717078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.554 [2024-11-20 11:08:55.717091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:06.554 [2024-11-20 11:08:55.717101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:32:06.554 [2024-11-20 11:08:55.717110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.555 [2024-11-20 11:08:55.717135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.555 [2024-11-20 11:08:55.717145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:06.555 [2024-11-20 11:08:55.717159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:06.555 [2024-11-20 11:08:55.717170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.555 [2024-11-20 11:08:55.717198] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:06.555 [2024-11-20 11:08:55.717209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.555 [2024-11-20 11:08:55.717219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:06.555 [2024-11-20 11:08:55.717229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:06.555 [2024-11-20 11:08:55.717239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.555 [2024-11-20 11:08:55.754223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.555 [2024-11-20 11:08:55.754280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:06.555 [2024-11-20 11:08:55.754311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.023 ms 00:32:06.555 [2024-11-20 11:08:55.754322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.555 [2024-11-20 11:08:55.754395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:06.555 [2024-11-20 11:08:55.754407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:06.555 [2024-11-20 11:08:55.754419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.035 ms 00:32:06.555 [2024-11-20 11:08:55.754428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:06.555 [2024-11-20 11:08:55.755437] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 198.593 ms, result 0 00:32:07.933  [2024-11-20T11:09:34.883Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-20 11:09:34.859739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.630 [2024-11-20 11:09:34.859833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:45.630 [2024-11-20 11:09:34.859865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:45.630 [2024-11-20 11:09:34.859886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.630 [2024-11-20 11:09:34.859929] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:45.630 [2024-11-20 11:09:34.868167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.630 [2024-11-20 11:09:34.868236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:45.630 [2024-11-20 11:09:34.868274] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.214 ms 00:32:45.630 [2024-11-20 11:09:34.868307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.630 [2024-11-20 11:09:34.868791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.630 [2024-11-20 11:09:34.868830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:45.630 [2024-11-20 11:09:34.868865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:32:45.630 [2024-11-20 11:09:34.868899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.630 [2024-11-20 11:09:34.868973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.630 [2024-11-20 11:09:34.869018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:45.630 [2024-11-20 11:09:34.869053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:45.630 [2024-11-20 11:09:34.869084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.630 [2024-11-20 11:09:34.869198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.630 [2024-11-20 11:09:34.869235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:45.630 [2024-11-20 11:09:34.869269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:32:45.630 [2024-11-20 11:09:34.869301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.630 [2024-11-20 11:09:34.869348] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:45.630 [2024-11-20 11:09:34.869390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.869430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.869466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.869502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.869537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.869573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.869633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.869660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.869685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.869711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.869736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.869761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.869787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.869957] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.869983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 
11:09:34.870621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:45.630 [2024-11-20 11:09:34.870949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.870974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.870998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 
00:32:45.631 [2024-11-20 11:09:34.871272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 
wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:45.631 [2024-11-20 11:09:34.871851] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:45.631 [2024-11-20 11:09:34.871865] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b367e70-c7da-47b1-b21d-8a8452023d94 00:32:45.631 [2024-11-20 11:09:34.871885] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:45.631 [2024-11-20 11:09:34.871898] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:45.631 [2024-11-20 11:09:34.871911] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:45.631 [2024-11-20 11:09:34.871925] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:45.631 [2024-11-20 11:09:34.871939] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:45.631 [2024-11-20 11:09:34.871952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:45.631 [2024-11-20 11:09:34.871966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:45.631 [2024-11-20 11:09:34.871978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:45.631 [2024-11-20 11:09:34.871991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:45.631 [2024-11-20 11:09:34.872005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.631 [2024-11-20 11:09:34.872019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:45.631 [2024-11-20 11:09:34.872034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.663 ms 00:32:45.631 [2024-11-20 11:09:34.872057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.890 [2024-11-20 11:09:34.893464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:45.890 [2024-11-20 11:09:34.893495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:45.890 [2024-11-20 11:09:34.893509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.361 ms 00:32:45.890 [2024-11-20 11:09:34.893519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.890 [2024-11-20 11:09:34.894190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:45.890 [2024-11-20 11:09:34.894276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:45.890 [2024-11-20 11:09:34.894294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:32:45.890 [2024-11-20 11:09:34.894311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.890 [2024-11-20 11:09:34.944575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.890 [2024-11-20 11:09:34.944616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:45.890 [2024-11-20 11:09:34.944630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.890 [2024-11-20 11:09:34.944641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.890 [2024-11-20 11:09:34.944694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.890 [2024-11-20 11:09:34.944705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:45.890 [2024-11-20 11:09:34.944715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.890 [2024-11-20 11:09:34.944730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.890 [2024-11-20 11:09:34.944791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.890 [2024-11-20 11:09:34.944806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:45.890 [2024-11-20 11:09:34.944816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.890 [2024-11-20 11:09:34.944826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.890 [2024-11-20 11:09:34.944842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.890 [2024-11-20 11:09:34.944853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:45.890 [2024-11-20 11:09:34.944864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.890 [2024-11-20 11:09:34.944874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:45.890 [2024-11-20 11:09:35.063824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:45.890 [2024-11-20 11:09:35.063887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:45.890 [2024-11-20 11:09:35.063902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:45.890 [2024-11-20 11:09:35.063913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.148 [2024-11-20 11:09:35.160687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.148 [2024-11-20 11:09:35.160732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:46.148 [2024-11-20 11:09:35.160745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.148 [2024-11-20 11:09:35.160755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.148 [2024-11-20 
11:09:35.160845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.148 [2024-11-20 11:09:35.160856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:46.148 [2024-11-20 11:09:35.160866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.148 [2024-11-20 11:09:35.160875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.148 [2024-11-20 11:09:35.160914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.148 [2024-11-20 11:09:35.160924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:46.148 [2024-11-20 11:09:35.160934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.148 [2024-11-20 11:09:35.160943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.148 [2024-11-20 11:09:35.161035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.148 [2024-11-20 11:09:35.161047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:46.148 [2024-11-20 11:09:35.161056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.148 [2024-11-20 11:09:35.161066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.148 [2024-11-20 11:09:35.161092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.148 [2024-11-20 11:09:35.161103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:46.148 [2024-11-20 11:09:35.161113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.148 [2024-11-20 11:09:35.161122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.148 [2024-11-20 11:09:35.161156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.148 [2024-11-20 11:09:35.161169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:46.148 [2024-11-20 11:09:35.161179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.148 [2024-11-20 11:09:35.161188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.148 [2024-11-20 11:09:35.161227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:46.148 [2024-11-20 11:09:35.161237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:46.148 [2024-11-20 11:09:35.161254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:46.148 [2024-11-20 11:09:35.161263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:46.148 [2024-11-20 11:09:35.161370] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 302.106 ms, result 0 00:32:47.083 00:32:47.083 00:32:47.083 11:09:36 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:48.986 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:48.986 11:09:37 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:32:48.986 [2024-11-20 11:09:37.898132] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
00:32:48.986 [2024-11-20 11:09:37.898241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85373 ] 00:32:48.986 [2024-11-20 11:09:38.073590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.986 [2024-11-20 11:09:38.181220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.556 [2024-11-20 11:09:38.513666] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:49.556 [2024-11-20 11:09:38.513726] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:49.556 [2024-11-20 11:09:38.673288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.556 [2024-11-20 11:09:38.673336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:49.556 [2024-11-20 11:09:38.673372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:49.556 [2024-11-20 11:09:38.673382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.556 [2024-11-20 11:09:38.673427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.556 [2024-11-20 11:09:38.673439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:49.556 [2024-11-20 11:09:38.673453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:32:49.556 [2024-11-20 11:09:38.673462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.556 [2024-11-20 11:09:38.673482] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:49.556 [2024-11-20 11:09:38.674470] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:49.556 [2024-11-20 11:09:38.674498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.556 [2024-11-20 11:09:38.674519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:49.556 [2024-11-20 11:09:38.674530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.021 ms 00:32:49.556 [2024-11-20 11:09:38.674540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.556 [2024-11-20 11:09:38.674869] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:49.556 [2024-11-20 11:09:38.674890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.556 [2024-11-20 11:09:38.674900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:49.556 [2024-11-20 11:09:38.674915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:49.556 [2024-11-20 11:09:38.674926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.556 [2024-11-20 11:09:38.674970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.556 [2024-11-20 11:09:38.674981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:49.556 [2024-11-20 11:09:38.674991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:32:49.556 [2024-11-20 11:09:38.675000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.556 [2024-11-20 11:09:38.675423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:49.556 [2024-11-20 11:09:38.675439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:49.556 [2024-11-20 11:09:38.675449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:32:49.556 [2024-11-20 11:09:38.675458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.556 [2024-11-20 11:09:38.675525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.556 [2024-11-20 11:09:38.675538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:49.556 [2024-11-20 11:09:38.675548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:49.556 [2024-11-20 11:09:38.675557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.556 [2024-11-20 11:09:38.675579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.556 [2024-11-20 11:09:38.675589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:49.556 [2024-11-20 11:09:38.675598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:49.556 [2024-11-20 11:09:38.675624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.556 [2024-11-20 11:09:38.675645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:49.556 [2024-11-20 11:09:38.680360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.556 [2024-11-20 11:09:38.680492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:49.556 [2024-11-20 11:09:38.680658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.725 ms 00:32:49.556 [2024-11-20 11:09:38.680696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.556 [2024-11-20 11:09:38.680749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.556 [2024-11-20 11:09:38.680781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:49.556 [2024-11-20 11:09:38.680811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:49.556 [2024-11-20 11:09:38.680890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.556 [2024-11-20 11:09:38.680972] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:49.556 [2024-11-20 11:09:38.681027] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:49.556 [2024-11-20 11:09:38.681109] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:49.556 [2024-11-20 11:09:38.681280] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:49.556 [2024-11-20 11:09:38.681403] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:49.556 [2024-11-20 11:09:38.681513] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:49.556 [2024-11-20 11:09:38.681699] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:49.556 [2024-11-20 11:09:38.681752] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:49.556 [2024-11-20 11:09:38.681801] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:49.556 [2024-11-20 11:09:38.681848] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:49.556 [2024-11-20 11:09:38.681940] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:49.556 [2024-11-20 11:09:38.681974] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:49.556 [2024-11-20 11:09:38.682003] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:49.556 [2024-11-20 11:09:38.682034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.556 [2024-11-20 11:09:38.682063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:49.556 [2024-11-20 11:09:38.682093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms 00:32:49.556 [2024-11-20 11:09:38.682167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.556 [2024-11-20 11:09:38.682273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.556 [2024-11-20 11:09:38.682306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:49.556 [2024-11-20 11:09:38.682336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:32:49.556 [2024-11-20 11:09:38.682451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.556 [2024-11-20 11:09:38.682576] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:49.556 [2024-11-20 11:09:38.682674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:49.556 [2024-11-20 11:09:38.682711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:49.556 [2024-11-20 11:09:38.682742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.556 [2024-11-20 11:09:38.682852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:49.556 [2024-11-20 11:09:38.682925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:49.556 [2024-11-20 11:09:38.682954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:49.556 [2024-11-20 11:09:38.682983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:49.556 [2024-11-20 11:09:38.683011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:49.556 [2024-11-20 11:09:38.683040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:49.556 [2024-11-20 11:09:38.683068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:49.556 [2024-11-20 11:09:38.683098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:49.556 [2024-11-20 11:09:38.683127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:49.556 [2024-11-20 11:09:38.683204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:49.556 [2024-11-20 11:09:38.683238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:49.556 [2024-11-20 11:09:38.683267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.556 [2024-11-20 11:09:38.683295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:49.556 [2024-11-20 11:09:38.683343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:49.557 [2024-11-20 11:09:38.683372] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.557 [2024-11-20 11:09:38.683402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:49.557 [2024-11-20 11:09:38.683493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:49.557 [2024-11-20 11:09:38.683521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:49.557 [2024-11-20 11:09:38.683549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:49.557 [2024-11-20 11:09:38.683578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:49.557 [2024-11-20 11:09:38.683589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:49.557 [2024-11-20 11:09:38.683615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:49.557 [2024-11-20 11:09:38.683624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:49.557 [2024-11-20 11:09:38.683633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:49.557 [2024-11-20 11:09:38.683643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:49.557 [2024-11-20 11:09:38.683652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:49.557 [2024-11-20 11:09:38.683661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:49.557 [2024-11-20 11:09:38.683670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:49.557 [2024-11-20 11:09:38.683679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:49.557 [2024-11-20 11:09:38.683690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:49.557 [2024-11-20 11:09:38.683699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:49.557 [2024-11-20 11:09:38.683709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:49.557 [2024-11-20 11:09:38.683718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:49.557 [2024-11-20 11:09:38.683727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:49.557 [2024-11-20 11:09:38.683736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:49.557 [2024-11-20 11:09:38.683745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.557 [2024-11-20 11:09:38.683754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:49.557 [2024-11-20 11:09:38.683763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:49.557 [2024-11-20 11:09:38.683772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.557 [2024-11-20 11:09:38.683781] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:49.557 [2024-11-20 11:09:38.683791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:49.557 [2024-11-20 11:09:38.683801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:49.557 [2024-11-20 11:09:38.683810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:49.557 [2024-11-20 11:09:38.683820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:49.557 [2024-11-20 11:09:38.683829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:49.557 [2024-11-20 11:09:38.683838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:49.557 
[2024-11-20 11:09:38.683848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:49.557 [2024-11-20 11:09:38.683856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:49.557 [2024-11-20 11:09:38.683865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:49.557 [2024-11-20 11:09:38.683876] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:49.557 [2024-11-20 11:09:38.683898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:49.557 [2024-11-20 11:09:38.683909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:49.557 [2024-11-20 11:09:38.683920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:49.557 [2024-11-20 11:09:38.683930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:49.557 [2024-11-20 11:09:38.683941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:49.557 [2024-11-20 11:09:38.683951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:49.557 [2024-11-20 11:09:38.683961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:49.557 [2024-11-20 11:09:38.683971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:49.557 [2024-11-20 11:09:38.683981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:49.557 [2024-11-20 11:09:38.683991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:49.557 [2024-11-20 11:09:38.684002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:49.557 [2024-11-20 11:09:38.684012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:49.557 [2024-11-20 11:09:38.684022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:49.557 [2024-11-20 11:09:38.684032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:49.557 [2024-11-20 11:09:38.684042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:49.557 [2024-11-20 11:09:38.684052] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:49.557 [2024-11-20 11:09:38.684063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:49.557 [2024-11-20 11:09:38.684074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:49.557 [2024-11-20 11:09:38.684084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:49.557 [2024-11-20 11:09:38.684094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:49.557 [2024-11-20 11:09:38.684104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:49.557 [2024-11-20 11:09:38.684116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.557 [2024-11-20 11:09:38.684125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:49.557 [2024-11-20 11:09:38.684135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.598 ms 00:32:49.557 [2024-11-20 11:09:38.684148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.557 [2024-11-20 11:09:38.718055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.557 [2024-11-20 11:09:38.718092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:49.557 [2024-11-20 11:09:38.718104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.911 ms 00:32:49.557 [2024-11-20 11:09:38.718114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.557 [2024-11-20 11:09:38.718181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.557 [2024-11-20 11:09:38.718192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:49.557 [2024-11-20 11:09:38.718202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:32:49.557 [2024-11-20 11:09:38.718215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.557 [2024-11-20 11:09:38.773022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.557 [2024-11-20 11:09:38.773058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:49.557 [2024-11-20 11:09:38.773070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.847 ms 00:32:49.557 [2024-11-20 11:09:38.773080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.557 [2024-11-20 11:09:38.773114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.557 [2024-11-20 11:09:38.773124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:49.557 [2024-11-20 11:09:38.773135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:49.557 [2024-11-20 11:09:38.773144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.557 [2024-11-20 11:09:38.773252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.557 [2024-11-20 11:09:38.773265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:49.557 [2024-11-20 11:09:38.773275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:32:49.557 [2024-11-20 11:09:38.773284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.557 [2024-11-20 11:09:38.773385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.557 [2024-11-20 11:09:38.773400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:49.557 [2024-11-20 11:09:38.773410] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:32:49.557 [2024-11-20 11:09:38.773419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.557 [2024-11-20 11:09:38.791929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.557 [2024-11-20 11:09:38.791962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:49.557 [2024-11-20 11:09:38.791974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.520 ms 00:32:49.557 [2024-11-20 11:09:38.791983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.557 [2024-11-20 11:09:38.792094] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:49.557 [2024-11-20 11:09:38.792108] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:49.557 [2024-11-20 11:09:38.792120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.557 [2024-11-20 11:09:38.792132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:49.557 [2024-11-20 11:09:38.792142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:32:49.557 [2024-11-20 11:09:38.792152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.557 [2024-11-20 11:09:38.802617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.557 [2024-11-20 11:09:38.802768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:49.557 [2024-11-20 11:09:38.802788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.467 ms 00:32:49.557 [2024-11-20 11:09:38.802798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.557 [2024-11-20 11:09:38.802923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.558 [2024-11-20 11:09:38.802935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:49.558 [2024-11-20 11:09:38.802946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:32:49.558 [2024-11-20 11:09:38.802961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.558 [2024-11-20 11:09:38.803011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.558 [2024-11-20 11:09:38.803022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:49.558 [2024-11-20 11:09:38.803033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:49.558 [2024-11-20 11:09:38.803042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.558 [2024-11-20 11:09:38.803727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.558 [2024-11-20 11:09:38.803745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:49.558 [2024-11-20 11:09:38.803756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.636 ms 00:32:49.558 [2024-11-20 11:09:38.803766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.558 [2024-11-20 11:09:38.803784] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:49.558 [2024-11-20 11:09:38.803802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.558 [2024-11-20 11:09:38.803812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:49.558 [2024-11-20 11:09:38.803822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:32:49.558 [2024-11-20 11:09:38.803832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.818 [2024-11-20 11:09:38.815091] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:49.818 [2024-11-20 11:09:38.815291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.818 [2024-11-20 11:09:38.815305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:49.818 [2024-11-20 11:09:38.815316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.458 ms 00:32:49.818 [2024-11-20 11:09:38.815326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.818 [2024-11-20 11:09:38.817113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.818 [2024-11-20 11:09:38.817144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:49.818 [2024-11-20 11:09:38.817155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.770 ms 00:32:49.818 [2024-11-20 11:09:38.817165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.818 [2024-11-20 11:09:38.817251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.818 [2024-11-20 11:09:38.817264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:49.818 [2024-11-20 11:09:38.817274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:49.818 [2024-11-20 11:09:38.817284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.818 [2024-11-20 11:09:38.817308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.818 [2024-11-20 11:09:38.817318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:49.818 [2024-11-20 11:09:38.817333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:49.818 [2024-11-20 11:09:38.817343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.818 [2024-11-20 11:09:38.817372] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:49.818 [2024-11-20 11:09:38.817384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.818 [2024-11-20 11:09:38.817394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:49.818 [2024-11-20 11:09:38.817404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:49.818 [2024-11-20 11:09:38.817413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.818 [2024-11-20 11:09:38.852045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.818 [2024-11-20 11:09:38.852086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:49.818 [2024-11-20 11:09:38.852099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.667 ms 00:32:49.818 [2024-11-20 11:09:38.852109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.818 [2024-11-20 11:09:38.852177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.818 [2024-11-20 11:09:38.852188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:49.818 [2024-11-20 11:09:38.852198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.031 ms 00:32:49.818 [2024-11-20 11:09:38.852207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.818 [2024-11-20 11:09:38.853244] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 179.837 ms, result 0 00:32:50.755  [2024-11-20T11:10:20.875Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-20 11:10:20.614998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.622 [2024-11-20 11:10:20.615190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:31.622 [2024-11-20 11:10:20.615216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:31.622 [2024-11-20 11:10:20.615227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.622 [2024-11-20 11:10:20.617005] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:31.622 [2024-11-20 11:10:20.622971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:31.622 [2024-11-20 11:10:20.623006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:31.622 [2024-11-20 11:10:20.623019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.938 ms 00:33:31.622 [2024-11-20 11:10:20.623044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.622 [2024-11-20 11:10:20.630942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.622 [2024-11-20 11:10:20.630987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:31.622 [2024-11-20 11:10:20.631000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.531 ms 00:33:31.622 [2024-11-20 11:10:20.631011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.622 [2024-11-20 11:10:20.631038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.622 [2024-11-20 11:10:20.631050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:31.622 [2024-11-20 11:10:20.631071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:31.622 [2024-11-20 11:10:20.631080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.622 [2024-11-20 11:10:20.631125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.622 [2024-11-20 11:10:20.631136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:31.622 [2024-11-20 11:10:20.631149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:31.622 [2024-11-20 11:10:20.631158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.622 [2024-11-20 11:10:20.631173] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:31.622 [2024-11-20 11:10:20.631185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128768 / 261120 wr_cnt: 1 state: open 00:33:31.622 [2024-11-20 11:10:20.631197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:31.622 [2024-11-20 11:10:20.631208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:31.622 [2024-11-20 11:10:20.631235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:31.622 [2024-11-20 11:10:20.631246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:31.622 [2024-11-20 11:10:20.631256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:31.622 [2024-11-20 11:10:20.631267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:31.622 [2024-11-20 11:10:20.631278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:31.622 [2024-11-20 11:10:20.631288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:31.622 [2024-11-20 11:10:20.631298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 
[2024-11-20 11:10:20.631329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 
state: free 00:33:31.623 [2024-11-20 11:10:20.631594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 
0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.631993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:31.623 [2024-11-20 11:10:20.632274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:31.624 [2024-11-20 11:10:20.632284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:31.624 [2024-11-20 11:10:20.632301] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:31.624 [2024-11-20 11:10:20.632310] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b367e70-c7da-47b1-b21d-8a8452023d94 00:33:31.624 [2024-11-20 11:10:20.632321] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128768 00:33:31.624 [2024-11-20 11:10:20.632330] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128800 00:33:31.624 [2024-11-20 11:10:20.632339] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128768 00:33:31.624 [2024-11-20 11:10:20.632349] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:33:31.624 [2024-11-20 11:10:20.632359] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:31.624 [2024-11-20 11:10:20.632368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:31.624 [2024-11-20 11:10:20.632382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:31.624 [2024-11-20 11:10:20.632391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:31.624 [2024-11-20 11:10:20.632400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:31.624 [2024-11-20 11:10:20.632409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.624 [2024-11-20 11:10:20.632418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:31.624 [2024-11-20 11:10:20.632428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:33:31.624 
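(Editor's note: the band validity dump and statistics block above are internally consistent: only Band 1 is open, its 128768 valid blocks equal the reported "total valid LBAs", and the logged WAF of 1.0002 is simply total writes over user writes. A worked check with figures copied from the dump; the parsing helper is illustrative, not part of SPDK:)

```python
import re

# Figures copied from the ftl_debug statistics block above.
total_writes, user_writes = 128_800, 128_768
print(f"WAF = {total_writes / user_writes:.4f}")     # 1.0002, as logged

# Band summary: "Band N: valid / size wr_cnt: W state: S"
BAND_RE = re.compile(r"Band \d+: (\d+) / \d+ wr_cnt: \d+ state: (\w+)")

def band_summary(text: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for m in BAND_RE.finditer(text):
        counts[m.group(2)] = counts.get(m.group(2), 0) + 1
    return counts    # for this dump: {'open': 1, 'free': 99}
```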
[2024-11-20 11:10:20.632438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.624 [2024-11-20 11:10:20.651485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.624 [2024-11-20 11:10:20.651520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:31.624 [2024-11-20 11:10:20.651533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.061 ms 00:33:31.624 [2024-11-20 11:10:20.651548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.624 [2024-11-20 11:10:20.652115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.624 [2024-11-20 11:10:20.652135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:31.624 [2024-11-20 11:10:20.652146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:33:31.624 [2024-11-20 11:10:20.652156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.624 [2024-11-20 11:10:20.699296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.624 [2024-11-20 11:10:20.699331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:31.624 [2024-11-20 11:10:20.699348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.624 [2024-11-20 11:10:20.699374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.624 [2024-11-20 11:10:20.699426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.624 [2024-11-20 11:10:20.699437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:31.624 [2024-11-20 11:10:20.699447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.624 [2024-11-20 11:10:20.699456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.624 [2024-11-20 11:10:20.699504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.624 [2024-11-20 11:10:20.699517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:31.624 [2024-11-20 11:10:20.699527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.624 [2024-11-20 11:10:20.699540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.624 [2024-11-20 11:10:20.699556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.624 [2024-11-20 11:10:20.699566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:31.624 [2024-11-20 11:10:20.699576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.624 [2024-11-20 11:10:20.699586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.624 [2024-11-20 11:10:20.814506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.624 [2024-11-20 11:10:20.814562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:31.624 [2024-11-20 11:10:20.814582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.624 [2024-11-20 11:10:20.814793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.884 [2024-11-20 11:10:20.907857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.884 [2024-11-20 11:10:20.908025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:31.884 [2024-11-20 11:10:20.908215] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.884 [2024-11-20 11:10:20.908252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.884 [2024-11-20 11:10:20.908353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.884 [2024-11-20 11:10:20.908495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:31.884 [2024-11-20 11:10:20.908571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.884 [2024-11-20 11:10:20.908626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.884 [2024-11-20 11:10:20.908699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.884 [2024-11-20 11:10:20.908732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:31.884 [2024-11-20 11:10:20.908763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.884 [2024-11-20 11:10:20.908791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.884 [2024-11-20 11:10:20.908971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.884 [2024-11-20 11:10:20.909013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:31.884 [2024-11-20 11:10:20.909044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.884 [2024-11-20 11:10:20.909124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.884 [2024-11-20 11:10:20.909191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.884 [2024-11-20 11:10:20.909226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:31.884 [2024-11-20 11:10:20.909256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.884 [2024-11-20 11:10:20.909320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.884 [2024-11-20 11:10:20.909440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.884 [2024-11-20 11:10:20.909476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:31.884 [2024-11-20 11:10:20.909540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.884 [2024-11-20 11:10:20.909574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.884 [2024-11-20 11:10:20.909749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:31.884 [2024-11-20 11:10:20.909785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:31.884 [2024-11-20 11:10:20.909816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:31.884 [2024-11-20 11:10:20.909845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.884 [2024-11-20 11:10:20.909983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 296.559 ms, result 0 00:33:33.264 00:33:33.264 00:33:33.264 11:10:22 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:33:33.264 [2024-11-20 11:10:22.487362] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 
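(Editor's note: the spdk_dd invocation above reads --count=262144 blocks starting --skip=131072 blocks into the ftl0 bdev. Assuming the 4 KiB FTL block size implied by the 1024 MiB copy total — an inference from this log, not stated on the command line — that works out to a 1 GiB read starting 512 MiB in:)

```python
BLOCK = 4096                      # bytes; inferred from 262144 blocks == 1024 MiB
skip, count = 131072, 262144      # values from the spdk_dd command line above
MiB = 1024 * 1024
print(f"offset = {skip * BLOCK / MiB:.0f} MiB")   # 512 MiB
print(f"length = {count * BLOCK / MiB:.0f} MiB")  # 1024 MiB, matching the copy total
```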
00:33:33.264 [2024-11-20 11:10:22.487657] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85809 ] 00:33:33.523 [2024-11-20 11:10:22.670675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.782 [2024-11-20 11:10:22.782845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.043 [2024-11-20 11:10:23.114983] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:34.043 [2024-11-20 11:10:23.115066] bdev.c:8282:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:34.043 [2024-11-20 11:10:23.274334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.043 [2024-11-20 11:10:23.274384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:34.043 [2024-11-20 11:10:23.274404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:34.043 [2024-11-20 11:10:23.274430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.043 [2024-11-20 11:10:23.274476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.043 [2024-11-20 11:10:23.274488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:34.043 [2024-11-20 11:10:23.274501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:33:34.043 [2024-11-20 11:10:23.274511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.043 [2024-11-20 11:10:23.274541] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:34.043 [2024-11-20 11:10:23.275449] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:34.043 [2024-11-20 11:10:23.275478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.043 [2024-11-20 11:10:23.275489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:34.043 [2024-11-20 11:10:23.275500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:33:34.043 [2024-11-20 11:10:23.275509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.043 [2024-11-20 11:10:23.275846] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:34.043 [2024-11-20 11:10:23.275873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.043 [2024-11-20 11:10:23.275884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:34.043 [2024-11-20 11:10:23.275899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:33:34.043 [2024-11-20 11:10:23.275910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.043 [2024-11-20 11:10:23.275954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.043 [2024-11-20 11:10:23.275965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:34.043 [2024-11-20 11:10:23.275975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:33:34.043 [2024-11-20 11:10:23.275984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.043 [2024-11-20 11:10:23.276432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:34.043 [2024-11-20 11:10:23.276450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:34.043 [2024-11-20 11:10:23.276460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:33:34.043 [2024-11-20 11:10:23.276470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.043 [2024-11-20 11:10:23.276539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.043 [2024-11-20 11:10:23.276552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:34.043 [2024-11-20 11:10:23.276562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:33:34.043 [2024-11-20 11:10:23.276572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.043 [2024-11-20 11:10:23.276614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.043 [2024-11-20 11:10:23.276626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:34.043 [2024-11-20 11:10:23.276640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:33:34.043 [2024-11-20 11:10:23.276650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.043 [2024-11-20 11:10:23.276672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:34.043 [2024-11-20 11:10:23.281269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.043 [2024-11-20 11:10:23.281300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:34.043 [2024-11-20 11:10:23.281311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.609 ms 00:33:34.043 [2024-11-20 11:10:23.281320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.043 [2024-11-20 11:10:23.281346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.043 [2024-11-20 11:10:23.281356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:34.043 [2024-11-20 11:10:23.281365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:34.043 [2024-11-20 11:10:23.281374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.043 [2024-11-20 11:10:23.281423] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:34.043 [2024-11-20 11:10:23.281445] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:34.043 [2024-11-20 11:10:23.281478] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:34.043 [2024-11-20 11:10:23.281494] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:34.043 [2024-11-20 11:10:23.281574] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:34.043 [2024-11-20 11:10:23.281586] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:34.043 [2024-11-20 11:10:23.281614] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:34.043 [2024-11-20 11:10:23.281626] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:34.043 [2024-11-20 11:10:23.281653] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:34.043 [2024-11-20 11:10:23.281668] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:34.043 [2024-11-20 11:10:23.281677] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:34.043 [2024-11-20 11:10:23.281687] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:34.043 [2024-11-20 11:10:23.281696] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:34.043 [2024-11-20 11:10:23.281706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.043 [2024-11-20 11:10:23.281731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:34.043 [2024-11-20 11:10:23.281741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:33:34.043 [2024-11-20 11:10:23.281750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.043 [2024-11-20 11:10:23.281818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.043 [2024-11-20 11:10:23.281828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:34.043 [2024-11-20 11:10:23.281838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:33:34.043 [2024-11-20 11:10:23.281851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.043 [2024-11-20 11:10:23.281939] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:34.043 [2024-11-20 11:10:23.281953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:34.043 [2024-11-20 11:10:23.281963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:34.043 [2024-11-20 11:10:23.281973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.043 [2024-11-20 11:10:23.281982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:34.043 [2024-11-20 11:10:23.281991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:34.043 [2024-11-20 11:10:23.282001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:34.043 [2024-11-20 11:10:23.282010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:34.043 [2024-11-20 11:10:23.282019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:34.043 [2024-11-20 11:10:23.282027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:34.044 [2024-11-20 11:10:23.282037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:34.044 [2024-11-20 11:10:23.282045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:34.044 [2024-11-20 11:10:23.282054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:34.044 [2024-11-20 11:10:23.282063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:34.044 [2024-11-20 11:10:23.282087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:34.044 [2024-11-20 11:10:23.282097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.044 [2024-11-20 11:10:23.282106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:34.044 [2024-11-20 11:10:23.282124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:34.044 [2024-11-20 11:10:23.282133] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.044 [2024-11-20 11:10:23.282142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:34.044 [2024-11-20 11:10:23.282151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:34.044 [2024-11-20 11:10:23.282160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:34.044 [2024-11-20 11:10:23.282169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:34.044 [2024-11-20 11:10:23.282178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:34.044 [2024-11-20 11:10:23.282187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:34.044 [2024-11-20 11:10:23.282196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:34.044 [2024-11-20 11:10:23.282206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:34.044 [2024-11-20 11:10:23.282215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:34.044 [2024-11-20 11:10:23.282223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:34.044 [2024-11-20 11:10:23.282233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:34.044 [2024-11-20 11:10:23.282242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:34.044 [2024-11-20 11:10:23.282250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:34.044 [2024-11-20 11:10:23.282259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:34.044 [2024-11-20 11:10:23.282268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:34.044 [2024-11-20 11:10:23.282277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:34.044 [2024-11-20 11:10:23.282287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:34.044 [2024-11-20 11:10:23.282295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:34.044 [2024-11-20 11:10:23.282304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:34.044 [2024-11-20 11:10:23.282313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:34.044 [2024-11-20 11:10:23.282322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.044 [2024-11-20 11:10:23.282331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:34.044 [2024-11-20 11:10:23.282339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:34.044 [2024-11-20 11:10:23.282349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.044 [2024-11-20 11:10:23.282358] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:34.044 [2024-11-20 11:10:23.282367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:34.044 [2024-11-20 11:10:23.282377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:34.044 [2024-11-20 11:10:23.282387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:34.044 [2024-11-20 11:10:23.282400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:34.044 [2024-11-20 11:10:23.282410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:34.044 [2024-11-20 11:10:23.282419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:34.044 
[2024-11-20 11:10:23.282428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:34.044 [2024-11-20 11:10:23.282437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:34.044 [2024-11-20 11:10:23.282446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:34.044 [2024-11-20 11:10:23.282457] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:34.044 [2024-11-20 11:10:23.282468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:34.044 [2024-11-20 11:10:23.282480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:34.044 [2024-11-20 11:10:23.282490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:34.044 [2024-11-20 11:10:23.282500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:34.044 [2024-11-20 11:10:23.282510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:34.044 [2024-11-20 11:10:23.282529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:34.044 [2024-11-20 11:10:23.282540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:34.044 [2024-11-20 11:10:23.282550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:34.044 [2024-11-20 11:10:23.282560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:34.044 [2024-11-20 11:10:23.282570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:34.044 [2024-11-20 11:10:23.282580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:34.044 [2024-11-20 11:10:23.282591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:34.044 [2024-11-20 11:10:23.282612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:34.044 [2024-11-20 11:10:23.282622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:34.044 [2024-11-20 11:10:23.282632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:34.044 [2024-11-20 11:10:23.282642] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:34.044 [2024-11-20 11:10:23.282653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:34.044 [2024-11-20 11:10:23.282664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:34.044 [2024-11-20 11:10:23.282674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:34.044 [2024-11-20 11:10:23.282684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:34.044 [2024-11-20 11:10:23.282695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:34.044 [2024-11-20 11:10:23.282706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.044 [2024-11-20 11:10:23.282716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:34.044 [2024-11-20 11:10:23.282726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:33:34.044 [2024-11-20 11:10:23.282735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.304 [2024-11-20 11:10:23.316843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.304 [2024-11-20 11:10:23.317025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:34.304 [2024-11-20 11:10:23.317064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.122 ms 00:33:34.304 [2024-11-20 11:10:23.317075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.304 [2024-11-20 11:10:23.317153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.317166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:34.305 [2024-11-20 11:10:23.317183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:33:34.305 [2024-11-20 11:10:23.317193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.386573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.386619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:34.305 [2024-11-20 11:10:23.386649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.440 ms 00:33:34.305 [2024-11-20 11:10:23.386659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.386696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.386708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:34.305 [2024-11-20 11:10:23.386718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:34.305 [2024-11-20 11:10:23.386727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.386862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.386875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:34.305 [2024-11-20 11:10:23.386886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:33:34.305 [2024-11-20 11:10:23.386895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.387011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.387024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:34.305 [2024-11-20 11:10:23.387034] 
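(Editor's note: the superblock v5 metadata dump above repeats the region table in raw form — blk_offs/blk_sz in FTL blocks, in hex — while the earlier dump_region output shows the same regions in MiB. Assuming the same 4 KiB block size as above, the two views agree; for instance region type 0x2 (l2p) at blk_offs 0x20, blk_sz 0x5000 is 0.12 MiB / 80.00 MiB, and 80 MiB is exactly the 20971520 L2P entries times the 4-byte address size reported during layout setup:)

```python
BLOCK = 4096                      # assumed FTL block size, as above

def region_mib(blk_offs: int, blk_sz: int) -> tuple[float, float]:
    """Convert a superblock region entry to (offset, size) in MiB."""
    MiB = 1024 * 1024
    return blk_offs * BLOCK / MiB, blk_sz * BLOCK / MiB

# Region type:0x2 (l2p) from the dump: blk_offs:0x20 blk_sz:0x5000
print(region_mib(0x20, 0x5000))   # (0.125, 80.0) -> "offset: 0.12 MiB, blocks: 80.00 MiB"

# Sanity: L2P table size = entries * address size, from the layout setup notices
assert 20_971_520 * 4 == 80 * 1024 * 1024
```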
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:33:34.305 [2024-11-20 11:10:23.387044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.405743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.405776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:34.305 [2024-11-20 11:10:23.405789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.709 ms 00:33:34.305 [2024-11-20 11:10:23.405814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.405938] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:34.305 [2024-11-20 11:10:23.405953] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:34.305 [2024-11-20 11:10:23.405964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.405977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:34.305 [2024-11-20 11:10:23.405987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:33:34.305 [2024-11-20 11:10:23.405996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.416555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.416585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:34.305 [2024-11-20 11:10:23.416608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.560 ms 00:33:34.305 [2024-11-20 11:10:23.416617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.416735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.416746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:34.305 [2024-11-20 11:10:23.416756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:33:34.305 [2024-11-20 11:10:23.416771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.416818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.416829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:34.305 [2024-11-20 11:10:23.416839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:34.305 [2024-11-20 11:10:23.416849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.417571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.417595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:34.305 [2024-11-20 11:10:23.417606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:33:34.305 [2024-11-20 11:10:23.417616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.417780] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:34.305 [2024-11-20 11:10:23.417795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.417806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
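(Editor's note: ftl_nv_cache_load_state above reports 4 full and 0 empty chunks against the NV cache chunk count of 5 printed during layout setup; by subtraction one chunk is neither, presumably the one currently being written — an inference from the counts, not something the log states:)

```python
chunk_count, full, empty = 5, 4, 0    # from the layout setup and load_state notices
other = chunk_count - full - empty
print(f"{other} chunk(s) neither full nor empty")   # 1 -- presumably still open
```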
[FTL][ftl0] name: Restore P2L checkpoints 00:33:34.305 [2024-11-20 11:10:23.417817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:33:34.305 [2024-11-20 11:10:23.417827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.429079] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:34.305 [2024-11-20 11:10:23.429381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.429400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:34.305 [2024-11-20 11:10:23.429412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.550 ms 00:33:34.305 [2024-11-20 11:10:23.429422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.431298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.431329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:34.305 [2024-11-20 11:10:23.431341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.855 ms 00:33:34.305 [2024-11-20 11:10:23.431351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.431433] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:33:34.305 [2024-11-20 11:10:23.431827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.431840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:34.305 [2024-11-20 11:10:23.431851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:33:34.305 [2024-11-20 11:10:23.431860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.431891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.431902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:34.305 [2024-11-20 11:10:23.431912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:34.305 [2024-11-20 11:10:23.431921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.431953] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:34.305 [2024-11-20 11:10:23.431964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.431974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:34.305 [2024-11-20 11:10:23.431983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:34.305 [2024-11-20 11:10:23.431993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.467668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.467819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:34.305 [2024-11-20 11:10:23.467893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.714 ms 00:33:34.305 [2024-11-20 11:10:23.467929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.468020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.305 [2024-11-20 11:10:23.468059] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:34.305 [2024-11-20 11:10:23.468091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:33:34.305 [2024-11-20 11:10:23.468121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.305 [2024-11-20 11:10:23.469196] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 194.756 ms, result 0 00:33:35.685  [2024-11-20T11:10:25.876Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-20T11:10:26.889Z] Copying: 51/1024 [MB] (26 MBps) [2024-11-20T11:10:27.827Z] Copying: 77/1024 [MB] (26 MBps) [2024-11-20T11:10:28.763Z] Copying: 104/1024 [MB] (26 MBps) [2024-11-20T11:10:29.700Z] Copying: 130/1024 [MB] (26 MBps) [2024-11-20T11:10:31.140Z] Copying: 156/1024 [MB] (26 MBps) [2024-11-20T11:10:31.709Z] Copying: 183/1024 [MB] (26 MBps) [2024-11-20T11:10:33.088Z] Copying: 209/1024 [MB] (26 MBps) [2024-11-20T11:10:34.025Z] Copying: 236/1024 [MB] (26 MBps) [2024-11-20T11:10:34.962Z] Copying: 262/1024 [MB] (26 MBps) [2024-11-20T11:10:35.898Z] Copying: 288/1024 [MB] (26 MBps) [2024-11-20T11:10:36.836Z] Copying: 314/1024 [MB] (26 MBps) [2024-11-20T11:10:37.773Z] Copying: 340/1024 [MB] (25 MBps) [2024-11-20T11:10:38.711Z] Copying: 366/1024 [MB] (25 MBps) [2024-11-20T11:10:40.089Z] Copying: 392/1024 [MB] (26 MBps) [2024-11-20T11:10:40.657Z] Copying: 417/1024 [MB] (25 MBps) [2024-11-20T11:10:42.034Z] Copying: 443/1024 [MB] (26 MBps) [2024-11-20T11:10:42.972Z] Copying: 469/1024 [MB] (26 MBps) [2024-11-20T11:10:43.909Z] Copying: 496/1024 [MB] (26 MBps) [2024-11-20T11:10:44.846Z] Copying: 521/1024 [MB] (25 MBps) [2024-11-20T11:10:45.783Z] Copying: 547/1024 [MB] (26 MBps) [2024-11-20T11:10:46.720Z] Copying: 573/1024 [MB] (25 MBps) [2024-11-20T11:10:47.658Z] Copying: 599/1024 [MB] (25 MBps) [2024-11-20T11:10:49.037Z] Copying: 625/1024 [MB] (25 MBps) [2024-11-20T11:10:49.973Z] Copying: 651/1024 [MB] (26 MBps) [2024-11-20T11:10:50.910Z] Copying: 676/1024 [MB] (24 MBps) [2024-11-20T11:10:51.848Z] Copying: 701/1024 [MB] (25 MBps) [2024-11-20T11:10:52.784Z] Copying: 727/1024 [MB] (25 MBps) [2024-11-20T11:10:53.721Z] Copying: 753/1024 [MB] (25 MBps) [2024-11-20T11:10:54.658Z] Copying: 779/1024 [MB] (25 MBps) [2024-11-20T11:10:56.036Z] Copying: 804/1024 [MB] (25 MBps) [2024-11-20T11:10:56.974Z] Copying: 830/1024 [MB] (26 MBps) [2024-11-20T11:10:57.958Z] Copying: 856/1024 [MB] (25 MBps) [2024-11-20T11:10:58.896Z] Copying: 882/1024 [MB] (25 MBps) [2024-11-20T11:10:59.832Z] Copying: 908/1024 [MB] (25 MBps) [2024-11-20T11:11:00.768Z] Copying: 934/1024 [MB] (25 MBps) [2024-11-20T11:11:01.705Z] Copying: 960/1024 [MB] (26 MBps) [2024-11-20T11:11:02.643Z] Copying: 986/1024 [MB] (26 MBps) [2024-11-20T11:11:03.210Z] Copying: 1013/1024 [MB] (26 MBps) [2024-11-20T11:11:03.495Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-20 11:11:03.214060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.242 [2024-11-20 11:11:03.214148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:14.242 [2024-11-20 11:11:03.214176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:14.242 [2024-11-20 11:11:03.214195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.242 [2024-11-20 11:11:03.214233] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:14.242 [2024-11-20 11:11:03.220917] mngt/ftl_mngt.c: 427:trace_step: 
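(Editor's note: this second, restore-fast startup finishes in 194.756 ms against 179.837 ms for the first, and the read-back copy above averages 26 MBps against the first pass's 24 MBps. A trivial sketch of the per-run deltas; the run labels are ours, not the harness's:)

```python
# Figures copied from the two "Management process finished" and
# "(average N MBps)" summaries in this log.
runs = {
    "first pass":   {"startup_ms": 179.837, "avg_mbps": 24},
    "restore fast": {"startup_ms": 194.756, "avg_mbps": 26},
}
a, b = runs["first pass"], runs["restore fast"]
print(f"startup delta: {b['startup_ms'] - a['startup_ms']:+.3f} ms")  # +14.919 ms
print(f"throughput delta: {b['avg_mbps'] - a['avg_mbps']:+d} MBps")   # +2 MBps
```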
*NOTICE*: [FTL][ftl0] Action 00:34:14.242 [2024-11-20 11:11:03.221106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:14.242 [2024-11-20 11:11:03.221133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.666 ms 00:34:14.242 [2024-11-20 11:11:03.221148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.242 [2024-11-20 11:11:03.221418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.242 [2024-11-20 11:11:03.221437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:14.242 [2024-11-20 11:11:03.221450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:34:14.242 [2024-11-20 11:11:03.221462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.242 [2024-11-20 11:11:03.221493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.242 [2024-11-20 11:11:03.221506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:14.242 [2024-11-20 11:11:03.221519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:14.242 [2024-11-20 11:11:03.221531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.242 [2024-11-20 11:11:03.221762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.242 [2024-11-20 11:11:03.221781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:14.242 [2024-11-20 11:11:03.221793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:34:14.242 [2024-11-20 11:11:03.221805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.242 [2024-11-20 11:11:03.221842] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:14.242 [2024-11-20 11:11:03.221863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:34:14.242 [2024-11-20 11:11:03.221878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.221892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.221904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.221917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.221930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.221942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.221954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.221967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.221979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.221992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 
state: free 00:34:14.242 [2024-11-20 11:11:03.222017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 
0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:14.242 [2024-11-20 11:11:03.222508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.222531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.223139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.223226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.223286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.223344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.223466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.223535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.223605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.223775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.223833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.223943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224773] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:14.243 [2024-11-20 11:11:03.224960] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:14.243 [2024-11-20 11:11:03.224972] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1b367e70-c7da-47b1-b21d-8a8452023d94 00:34:14.243 [2024-11-20 11:11:03.224985] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:34:14.243 [2024-11-20 11:11:03.224998] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2336 00:34:14.243 [2024-11-20 11:11:03.225009] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2304 00:34:14.243 [2024-11-20 11:11:03.225022] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0139 00:34:14.243 [2024-11-20 11:11:03.225039] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:14.243 [2024-11-20 11:11:03.225052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:14.243 [2024-11-20 11:11:03.225063] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:14.243 [2024-11-20 11:11:03.225074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:14.243 [2024-11-20 11:11:03.225084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:14.243 [2024-11-20 11:11:03.225097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.243 [2024-11-20 11:11:03.225110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:14.243 [2024-11-20 11:11:03.225122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
3.279 ms 00:34:14.243 [2024-11-20 11:11:03.225134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.243 [2024-11-20 11:11:03.244135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.243 [2024-11-20 11:11:03.244173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:14.243 [2024-11-20 11:11:03.244192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.000 ms 00:34:14.243 [2024-11-20 11:11:03.244201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.243 [2024-11-20 11:11:03.244825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.243 [2024-11-20 11:11:03.244841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:14.243 [2024-11-20 11:11:03.244853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:34:14.243 [2024-11-20 11:11:03.244863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.243 [2024-11-20 11:11:03.293839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:14.243 [2024-11-20 11:11:03.293876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:14.243 [2024-11-20 11:11:03.293889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:14.243 [2024-11-20 11:11:03.293898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.243 [2024-11-20 11:11:03.293950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:14.243 [2024-11-20 11:11:03.293960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:14.243 [2024-11-20 11:11:03.293969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:14.243 [2024-11-20 11:11:03.293978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.243 [2024-11-20 11:11:03.294030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:14.243 [2024-11-20 11:11:03.294047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:14.243 [2024-11-20 11:11:03.294056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:14.243 [2024-11-20 11:11:03.294065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.243 [2024-11-20 11:11:03.294080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:14.243 [2024-11-20 11:11:03.294090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:14.243 [2024-11-20 11:11:03.294100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:14.243 [2024-11-20 11:11:03.294109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.243 [2024-11-20 11:11:03.410186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:14.243 [2024-11-20 11:11:03.410253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:14.243 [2024-11-20 11:11:03.410267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:14.243 [2024-11-20 11:11:03.410277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.502 [2024-11-20 11:11:03.507978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:14.502 [2024-11-20 11:11:03.508159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:14.502 [2024-11-20 
11:11:03.508198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:14.502 [2024-11-20 11:11:03.508209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.502 [2024-11-20 11:11:03.508303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:14.502 [2024-11-20 11:11:03.508317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:14.502 [2024-11-20 11:11:03.508332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:14.502 [2024-11-20 11:11:03.508342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.502 [2024-11-20 11:11:03.508379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:14.502 [2024-11-20 11:11:03.508390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:14.502 [2024-11-20 11:11:03.508400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:14.502 [2024-11-20 11:11:03.508410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.502 [2024-11-20 11:11:03.508510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:14.502 [2024-11-20 11:11:03.508524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:14.502 [2024-11-20 11:11:03.508535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:14.502 [2024-11-20 11:11:03.508549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.502 [2024-11-20 11:11:03.508615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:14.502 [2024-11-20 11:11:03.508630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:14.502 [2024-11-20 11:11:03.508641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:14.502 [2024-11-20 11:11:03.508651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.502 [2024-11-20 11:11:03.508689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:14.502 [2024-11-20 11:11:03.508700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:14.502 [2024-11-20 11:11:03.508710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:14.502 [2024-11-20 11:11:03.508724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.502 [2024-11-20 11:11:03.508762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:14.502 [2024-11-20 11:11:03.508774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:14.502 [2024-11-20 11:11:03.508785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:14.502 [2024-11-20 11:11:03.508795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.502 [2024-11-20 11:11:03.508907] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 295.305 ms, result 0 00:34:15.440 00:34:15.440 00:34:15.440 11:11:04 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:17.347 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- 
ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 84251 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 84251 ']' 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 84251 00:34:17.347 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84251) - No such process 00:34:17.347 Process with pid 84251 is not found 00:34:17.347 Remove shared memory files 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 84251 is not found' 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_1b367e70-c7da-47b1-b21d-8a8452023d94_band_md /dev/hugepages/ftl_1b367e70-c7da-47b1-b21d-8a8452023d94_l2p_l1 /dev/hugepages/ftl_1b367e70-c7da-47b1-b21d-8a8452023d94_l2p_l2 /dev/hugepages/ftl_1b367e70-c7da-47b1-b21d-8a8452023d94_l2p_l2_ctx /dev/hugepages/ftl_1b367e70-c7da-47b1-b21d-8a8452023d94_nvc_md /dev/hugepages/ftl_1b367e70-c7da-47b1-b21d-8a8452023d94_p2l_pool /dev/hugepages/ftl_1b367e70-c7da-47b1-b21d-8a8452023d94_sb /dev/hugepages/ftl_1b367e70-c7da-47b1-b21d-8a8452023d94_sb_shm /dev/hugepages/ftl_1b367e70-c7da-47b1-b21d-8a8452023d94_trim_bitmap /dev/hugepages/ftl_1b367e70-c7da-47b1-b21d-8a8452023d94_trim_log /dev/hugepages/ftl_1b367e70-c7da-47b1-b21d-8a8452023d94_trim_md /dev/hugepages/ftl_1b367e70-c7da-47b1-b21d-8a8452023d94_vmap 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:34:17.347 ************************************ 00:34:17.347 END TEST ftl_restore_fast 00:34:17.347 ************************************ 00:34:17.347 00:34:17.347 real 3m16.247s 00:34:17.347 user 3m4.798s 00:34:17.347 sys 0m12.705s 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:17.347 11:11:06 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:34:17.347 11:11:06 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:17.347 11:11:06 ftl -- ftl/ftl.sh@14 -- # killprocess 76485 00:34:17.347 Process with pid 76485 is not found 00:34:17.347 11:11:06 ftl -- common/autotest_common.sh@954 -- # '[' -z 76485 ']' 00:34:17.347 11:11:06 ftl -- common/autotest_common.sh@958 -- # kill -0 76485 00:34:17.347 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76485) - No such process 00:34:17.347 11:11:06 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76485 is not found' 00:34:17.347 11:11:06 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:17.347 11:11:06 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:17.347 11:11:06 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86269 00:34:17.347 11:11:06 ftl -- 
ftl/ftl.sh@20 -- # waitforlisten 86269 00:34:17.347 11:11:06 ftl -- common/autotest_common.sh@835 -- # '[' -z 86269 ']' 00:34:17.347 11:11:06 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:17.347 11:11:06 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:17.347 11:11:06 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:17.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:17.347 11:11:06 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:17.347 11:11:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:17.347 [2024-11-20 11:11:06.484378] Starting SPDK v25.01-pre git sha1 a5dab6cf7 / DPDK 24.03.0 initialization... 00:34:17.347 [2024-11-20 11:11:06.485116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86269 ] 00:34:17.607 [2024-11-20 11:11:06.662993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.607 [2024-11-20 11:11:06.770280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.544 11:11:07 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:18.544 11:11:07 ftl -- common/autotest_common.sh@868 -- # return 0 00:34:18.544 11:11:07 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:18.804 nvme0n1 00:34:18.804 11:11:07 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:18.804 11:11:07 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:18.804 11:11:07 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:19.063 11:11:08 ftl -- ftl/common.sh@28 -- # stores=976df8ff-b8b5-4826-bb25-83a0f875f10c 00:34:19.063 11:11:08 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:19.063 11:11:08 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 976df8ff-b8b5-4826-bb25-83a0f875f10c 00:34:19.063 11:11:08 ftl -- ftl/ftl.sh@23 -- # killprocess 86269 00:34:19.063 11:11:08 ftl -- common/autotest_common.sh@954 -- # '[' -z 86269 ']' 00:34:19.063 11:11:08 ftl -- common/autotest_common.sh@958 -- # kill -0 86269 00:34:19.063 11:11:08 ftl -- common/autotest_common.sh@959 -- # uname 00:34:19.063 11:11:08 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:19.063 11:11:08 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86269 00:34:19.322 killing process with pid 86269 00:34:19.322 11:11:08 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:19.322 11:11:08 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:19.322 11:11:08 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86269' 00:34:19.322 11:11:08 ftl -- common/autotest_common.sh@973 -- # kill 86269 00:34:19.322 11:11:08 ftl -- common/autotest_common.sh@978 -- # wait 86269 00:34:21.857 11:11:10 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:21.857 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:21.857 Waiting for block devices as requested 00:34:21.857 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:22.116 
0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:22.116 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:22.375 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:27.693 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:27.693 Remove shared memory files 00:34:27.693 11:11:16 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:27.693 11:11:16 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:27.693 11:11:16 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:27.693 11:11:16 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:27.693 11:11:16 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:27.693 11:11:16 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:27.693 11:11:16 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:27.693 ************************************ 00:34:27.693 END TEST ftl 00:34:27.693 ************************************ 00:34:27.693 00:34:27.693 real 14m42.709s 00:34:27.693 user 17m1.273s 00:34:27.693 sys 1m38.691s 00:34:27.693 11:11:16 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:27.693 11:11:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:27.693 11:11:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:27.693 11:11:16 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:27.693 11:11:16 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:27.693 11:11:16 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:27.693 11:11:16 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:27.693 11:11:16 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:27.693 11:11:16 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:27.693 11:11:16 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:27.693 11:11:16 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:27.693 11:11:16 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:27.693 11:11:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:27.693 11:11:16 -- common/autotest_common.sh@10 -- # set +x 00:34:27.693 11:11:16 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:27.693 11:11:16 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:27.693 11:11:16 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:27.693 11:11:16 -- common/autotest_common.sh@10 -- # set +x 00:34:29.601 INFO: APP EXITING 00:34:29.601 INFO: killing all VMs 00:34:29.601 INFO: killing vhost app 00:34:29.601 INFO: EXIT DONE 00:34:30.167 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:30.735 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:30.735 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:30.735 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:30.735 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:31.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:31.562 Cleaning 00:34:31.562 Removing: /var/run/dpdk/spdk0/config 00:34:31.562 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:31.562 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:31.562 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:31.562 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:31.562 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:31.562 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:31.562 Removing: /var/run/dpdk/spdk0 00:34:31.562 Removing: /var/run/dpdk/spdk_pid57433 00:34:31.562 Removing: /var/run/dpdk/spdk_pid57674 
00:34:31.562 Removing: /var/run/dpdk/spdk_pid57903 00:34:31.562 Removing: /var/run/dpdk/spdk_pid58007 00:34:31.562 Removing: /var/run/dpdk/spdk_pid58063 00:34:31.562 Removing: /var/run/dpdk/spdk_pid58191 00:34:31.562 Removing: /var/run/dpdk/spdk_pid58209 00:34:31.562 Removing: /var/run/dpdk/spdk_pid58419 00:34:31.562 Removing: /var/run/dpdk/spdk_pid58536 00:34:31.562 Removing: /var/run/dpdk/spdk_pid58643 00:34:31.562 Removing: /var/run/dpdk/spdk_pid58765 00:34:31.562 Removing: /var/run/dpdk/spdk_pid58873 00:34:31.562 Removing: /var/run/dpdk/spdk_pid58913 00:34:31.562 Removing: /var/run/dpdk/spdk_pid58949 00:34:31.821 Removing: /var/run/dpdk/spdk_pid59021 00:34:31.821 Removing: /var/run/dpdk/spdk_pid59148 00:34:31.821 Removing: /var/run/dpdk/spdk_pid59590 00:34:31.821 Removing: /var/run/dpdk/spdk_pid59665 00:34:31.821 Removing: /var/run/dpdk/spdk_pid59739 00:34:31.821 Removing: /var/run/dpdk/spdk_pid59755 00:34:31.821 Removing: /var/run/dpdk/spdk_pid59906 00:34:31.821 Removing: /var/run/dpdk/spdk_pid59927 00:34:31.821 Removing: /var/run/dpdk/spdk_pid60078 00:34:31.821 Removing: /var/run/dpdk/spdk_pid60099 00:34:31.821 Removing: /var/run/dpdk/spdk_pid60163 00:34:31.821 Removing: /var/run/dpdk/spdk_pid60181 00:34:31.821 Removing: /var/run/dpdk/spdk_pid60251 00:34:31.821 Removing: /var/run/dpdk/spdk_pid60269 00:34:31.821 Removing: /var/run/dpdk/spdk_pid60464 00:34:31.821 Removing: /var/run/dpdk/spdk_pid60506 00:34:31.821 Removing: /var/run/dpdk/spdk_pid60589 00:34:31.821 Removing: /var/run/dpdk/spdk_pid60778 00:34:31.821 Removing: /var/run/dpdk/spdk_pid60875 00:34:31.821 Removing: /var/run/dpdk/spdk_pid60917 00:34:31.821 Removing: /var/run/dpdk/spdk_pid61368 00:34:31.821 Removing: /var/run/dpdk/spdk_pid61466 00:34:31.821 Removing: /var/run/dpdk/spdk_pid61581 00:34:31.821 Removing: /var/run/dpdk/spdk_pid61639 00:34:31.821 Removing: /var/run/dpdk/spdk_pid61665 00:34:31.821 Removing: /var/run/dpdk/spdk_pid61748 00:34:31.821 Removing: /var/run/dpdk/spdk_pid62391 00:34:31.821 Removing: /var/run/dpdk/spdk_pid62439 00:34:31.821 Removing: /var/run/dpdk/spdk_pid62926 00:34:31.821 Removing: /var/run/dpdk/spdk_pid63035 00:34:31.821 Removing: /var/run/dpdk/spdk_pid63154 00:34:31.821 Removing: /var/run/dpdk/spdk_pid63208 00:34:31.821 Removing: /var/run/dpdk/spdk_pid63234 00:34:31.821 Removing: /var/run/dpdk/spdk_pid63259 00:34:31.821 Removing: /var/run/dpdk/spdk_pid65159 00:34:31.821 Removing: /var/run/dpdk/spdk_pid65307 00:34:31.821 Removing: /var/run/dpdk/spdk_pid65311 00:34:31.821 Removing: /var/run/dpdk/spdk_pid65323 00:34:31.821 Removing: /var/run/dpdk/spdk_pid65373 00:34:31.821 Removing: /var/run/dpdk/spdk_pid65377 00:34:31.821 Removing: /var/run/dpdk/spdk_pid65389 00:34:31.821 Removing: /var/run/dpdk/spdk_pid65434 00:34:31.821 Removing: /var/run/dpdk/spdk_pid65443 00:34:31.821 Removing: /var/run/dpdk/spdk_pid65455 00:34:31.821 Removing: /var/run/dpdk/spdk_pid65501 00:34:31.821 Removing: /var/run/dpdk/spdk_pid65505 00:34:31.821 Removing: /var/run/dpdk/spdk_pid65517 00:34:31.821 Removing: /var/run/dpdk/spdk_pid66936 00:34:31.821 Removing: /var/run/dpdk/spdk_pid67044 00:34:31.822 Removing: /var/run/dpdk/spdk_pid68484 00:34:31.822 Removing: /var/run/dpdk/spdk_pid70226 00:34:31.822 Removing: /var/run/dpdk/spdk_pid70311 00:34:31.822 Removing: /var/run/dpdk/spdk_pid70392 00:34:31.822 Removing: /var/run/dpdk/spdk_pid70498 00:34:31.822 Removing: /var/run/dpdk/spdk_pid70603 00:34:31.822 Removing: /var/run/dpdk/spdk_pid70701 00:34:32.081 Removing: /var/run/dpdk/spdk_pid70786 00:34:32.081 Removing: 
/var/run/dpdk/spdk_pid70862 00:34:32.081 Removing: /var/run/dpdk/spdk_pid70972 00:34:32.081 Removing: /var/run/dpdk/spdk_pid71070 00:34:32.081 Removing: /var/run/dpdk/spdk_pid71166 00:34:32.081 Removing: /var/run/dpdk/spdk_pid71251 00:34:32.081 Removing: /var/run/dpdk/spdk_pid71332 00:34:32.081 Removing: /var/run/dpdk/spdk_pid71437 00:34:32.081 Removing: /var/run/dpdk/spdk_pid71529 00:34:32.081 Removing: /var/run/dpdk/spdk_pid71630 00:34:32.081 Removing: /var/run/dpdk/spdk_pid71710 00:34:32.081 Removing: /var/run/dpdk/spdk_pid71786 00:34:32.081 Removing: /var/run/dpdk/spdk_pid71895 00:34:32.081 Removing: /var/run/dpdk/spdk_pid71992 00:34:32.081 Removing: /var/run/dpdk/spdk_pid72088 00:34:32.081 Removing: /var/run/dpdk/spdk_pid72172 00:34:32.081 Removing: /var/run/dpdk/spdk_pid72246 00:34:32.081 Removing: /var/run/dpdk/spdk_pid72323 00:34:32.081 Removing: /var/run/dpdk/spdk_pid72397 00:34:32.081 Removing: /var/run/dpdk/spdk_pid72509 00:34:32.081 Removing: /var/run/dpdk/spdk_pid72603 00:34:32.081 Removing: /var/run/dpdk/spdk_pid72699 00:34:32.081 Removing: /var/run/dpdk/spdk_pid72788 00:34:32.081 Removing: /var/run/dpdk/spdk_pid72865 00:34:32.081 Removing: /var/run/dpdk/spdk_pid72946 00:34:32.081 Removing: /var/run/dpdk/spdk_pid73030 00:34:32.081 Removing: /var/run/dpdk/spdk_pid73133 00:34:32.081 Removing: /var/run/dpdk/spdk_pid73228 00:34:32.081 Removing: /var/run/dpdk/spdk_pid73383 00:34:32.081 Removing: /var/run/dpdk/spdk_pid73673 00:34:32.081 Removing: /var/run/dpdk/spdk_pid73717 00:34:32.081 Removing: /var/run/dpdk/spdk_pid74174 00:34:32.081 Removing: /var/run/dpdk/spdk_pid74359 00:34:32.081 Removing: /var/run/dpdk/spdk_pid74459 00:34:32.081 Removing: /var/run/dpdk/spdk_pid74569 00:34:32.081 Removing: /var/run/dpdk/spdk_pid74628 00:34:32.081 Removing: /var/run/dpdk/spdk_pid74659 00:34:32.081 Removing: /var/run/dpdk/spdk_pid74944 00:34:32.081 Removing: /var/run/dpdk/spdk_pid75010 00:34:32.081 Removing: /var/run/dpdk/spdk_pid75101 00:34:32.081 Removing: /var/run/dpdk/spdk_pid75528 00:34:32.081 Removing: /var/run/dpdk/spdk_pid75678 00:34:32.081 Removing: /var/run/dpdk/spdk_pid76485 00:34:32.081 Removing: /var/run/dpdk/spdk_pid76628 00:34:32.081 Removing: /var/run/dpdk/spdk_pid76850 00:34:32.081 Removing: /var/run/dpdk/spdk_pid76957 00:34:32.081 Removing: /var/run/dpdk/spdk_pid77274 00:34:32.081 Removing: /var/run/dpdk/spdk_pid77522 00:34:32.081 Removing: /var/run/dpdk/spdk_pid77886 00:34:32.081 Removing: /var/run/dpdk/spdk_pid78088 00:34:32.081 Removing: /var/run/dpdk/spdk_pid78229 00:34:32.081 Removing: /var/run/dpdk/spdk_pid78287 00:34:32.081 Removing: /var/run/dpdk/spdk_pid78436 00:34:32.081 Removing: /var/run/dpdk/spdk_pid78461 00:34:32.081 Removing: /var/run/dpdk/spdk_pid78525 00:34:32.081 Removing: /var/run/dpdk/spdk_pid78739 00:34:32.081 Removing: /var/run/dpdk/spdk_pid78971 00:34:32.081 Removing: /var/run/dpdk/spdk_pid79433 00:34:32.340 Removing: /var/run/dpdk/spdk_pid79906 00:34:32.340 Removing: /var/run/dpdk/spdk_pid80380 00:34:32.340 Removing: /var/run/dpdk/spdk_pid80900 00:34:32.340 Removing: /var/run/dpdk/spdk_pid81060 00:34:32.340 Removing: /var/run/dpdk/spdk_pid81147 00:34:32.340 Removing: /var/run/dpdk/spdk_pid81770 00:34:32.341 Removing: /var/run/dpdk/spdk_pid81834 00:34:32.341 Removing: /var/run/dpdk/spdk_pid82300 00:34:32.341 Removing: /var/run/dpdk/spdk_pid82676 00:34:32.341 Removing: /var/run/dpdk/spdk_pid83184 00:34:32.341 Removing: /var/run/dpdk/spdk_pid83311 00:34:32.341 Removing: /var/run/dpdk/spdk_pid83366 00:34:32.341 Removing: /var/run/dpdk/spdk_pid83430 
00:34:32.341 Removing: /var/run/dpdk/spdk_pid83486 00:34:32.341 Removing: /var/run/dpdk/spdk_pid83550 00:34:32.341 Removing: /var/run/dpdk/spdk_pid83743 00:34:32.341 Removing: /var/run/dpdk/spdk_pid83827 00:34:32.341 Removing: /var/run/dpdk/spdk_pid83893 00:34:32.341 Removing: /var/run/dpdk/spdk_pid83969 00:34:32.341 Removing: /var/run/dpdk/spdk_pid84009 00:34:32.341 Removing: /var/run/dpdk/spdk_pid84077 00:34:32.341 Removing: /var/run/dpdk/spdk_pid84251 00:34:32.341 Removing: /var/run/dpdk/spdk_pid84487 00:34:32.341 Removing: /var/run/dpdk/spdk_pid84947 00:34:32.341 Removing: /var/run/dpdk/spdk_pid85373 00:34:32.341 Removing: /var/run/dpdk/spdk_pid85809 00:34:32.341 Removing: /var/run/dpdk/spdk_pid86269 00:34:32.341 Clean 00:34:32.341 11:11:21 -- common/autotest_common.sh@1453 -- # return 0 00:34:32.341 11:11:21 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:34:32.341 11:11:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:32.341 11:11:21 -- common/autotest_common.sh@10 -- # set +x 00:34:32.341 11:11:21 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:34:32.341 11:11:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:32.341 11:11:21 -- common/autotest_common.sh@10 -- # set +x 00:34:32.600 11:11:21 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:32.600 11:11:21 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:32.600 11:11:21 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:32.600 11:11:21 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:34:32.600 11:11:21 -- spdk/autotest.sh@398 -- # hostname 00:34:32.600 11:11:21 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:32.600 geninfo: WARNING: invalid characters removed from testname! 
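For anyone replaying the coverage post-processing by hand: the lcov invocations below reduce to one merge of the base and test captures followed by a series of path filters. A minimal sketch under stated assumptions — LCOV_OPTS and OUT are convenience names introduced here (the autotest scripts do not define them), the genhtml_* --rc switches printed in the log are omitted because they only affect HTML report generation rather than these tracefile operations, and the trailing rm uses the relative paths exactly as the log prints them:

  # Merge base + test captures, then strip DPDK, system, and helper-app sources.
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
  OUT=/home/vagrant/spdk_repo/output
  lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  lcov $LCOV_OPTS -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
  lcov $LCOV_OPTS -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
  lcov $LCOV_OPTS -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
  lcov $LCOV_OPTS -r "$OUT/cov_total.info" '*/app/spdk_lspci/*' -o "$OUT/cov_total.info"
  lcov $LCOV_OPTS -r "$OUT/cov_total.info" '*/app/spdk_top/*' -o "$OUT/cov_total.info"
  rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR  # scratch files, as in the log

Filtering in place with the same file as -r input and -o output, as the log does, works because lcov reads the whole tracefile into memory before writing the result.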
00:34:59.155 11:11:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:59.155 11:11:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:01.693 11:11:50 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:03.601 11:11:52 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:05.508 11:11:54 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:08.046 11:11:56 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:09.955 11:11:58 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:09.955 11:11:58 -- spdk/autorun.sh@1 -- $ timing_finish 00:35:09.955 11:11:58 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:35:09.955 11:11:58 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:09.955 11:11:58 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:09.956 11:11:58 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:09.956 + [[ -n 5245 ]] 00:35:09.956 + sudo kill 5245 00:35:09.966 [Pipeline] } 00:35:09.982 [Pipeline] // timeout 00:35:09.987 [Pipeline] } 00:35:10.002 [Pipeline] // stage 00:35:10.007 [Pipeline] } 00:35:10.021 [Pipeline] // catchError 00:35:10.031 [Pipeline] stage 00:35:10.033 [Pipeline] { (Stop VM) 00:35:10.045 [Pipeline] sh 00:35:10.327 + vagrant halt 00:35:12.862 ==> default: Halting domain... 
00:35:19.443 [Pipeline] sh 00:35:19.725 + vagrant destroy -f 00:35:22.265 ==> default: Removing domain... 00:35:22.845 [Pipeline] sh 00:35:23.127 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:35:23.135 [Pipeline] } 00:35:23.147 [Pipeline] // stage 00:35:23.151 [Pipeline] } 00:35:23.162 [Pipeline] // dir 00:35:23.166 [Pipeline] } 00:35:23.178 [Pipeline] // wrap 00:35:23.183 [Pipeline] } 00:35:23.194 [Pipeline] // catchError 00:35:23.202 [Pipeline] stage 00:35:23.204 [Pipeline] { (Epilogue) 00:35:23.216 [Pipeline] sh 00:35:23.496 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:28.785 [Pipeline] catchError 00:35:28.787 [Pipeline] { 00:35:28.800 [Pipeline] sh 00:35:29.084 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:29.084 Artifacts sizes are good 00:35:29.094 [Pipeline] } 00:35:29.108 [Pipeline] // catchError 00:35:29.119 [Pipeline] archiveArtifacts 00:35:29.127 Archiving artifacts 00:35:29.283 [Pipeline] cleanWs 00:35:29.319 [WS-CLEANUP] Deleting project workspace... 00:35:29.319 [WS-CLEANUP] Deferred wipeout is used... 00:35:29.350 [WS-CLEANUP] done 00:35:29.352 [Pipeline] } 00:35:29.364 [Pipeline] // stage 00:35:29.369 [Pipeline] } 00:35:29.380 [Pipeline] // node 00:35:29.385 [Pipeline] End of Pipeline 00:35:29.412 Finished: SUCCESS
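The VM teardown recorded above condenses to a handful of commands if it ever needs to be replayed without Jenkins. A rough sketch; VAGRANT_DIR is a placeholder, since the directory wrapping these sh steps is not visible in this part of the log:

  # Stop and discard the test VM, then stage results where the archiver expects them.
  cd "$VAGRANT_DIR"      # the Vagrant working directory; its exact path is not shown here
  vagrant halt           # graceful shutdown ("Halting domain...")
  vagrant destroy -f     # remove the domain without confirmation
  mv output /var/jenkins/workspace/nvme-vg-autotest/output  # preserve artifacts before cleanWs wipes the workspace

The compression and size checks that follow (compress_artifacts.sh, check_artifacts_size.sh) are the jbp helper scripts invoked directly by the pipeline, not part of the SPDK repo itself.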